I am using Docker 1.12 and Rancher server 1.5.9. I am trying to create a stack in Rancher to deploy and orchestrate my app. My issue is that I need to pass the hostname of the host where the container will be running as an environment variable.
Since I have only one image that will be used to create one kind of container on several hosts (let's say 2 for the tests), I can't pass it like HOSTNAME=myhostname. The value needs to be a variable that gets set to the hostname of the Docker host.
Does anyone know how to do that with the rancher server UI?
Does anyone know how Rancher retrieves the hostname when adding a custom host?
Can we use the entry point or CMD to do that?
Having an /etc/hosts on the machine that prioritizes the desired name over localhost helped in my case. Obviously, also have an /etc/hostname that agrees with /etc/hosts.
I am using Container Linux, so for me it looks like this in the ct-config before converting to Ignition.
storage:
  files:
    - filesystem: "root"
      path: "/etc/hostname"
      mode: 0644
      contents:
        inline: ${hostname}
    - filesystem: "root"
      path: "/etc/hosts"
      mode: 0644
      contents:
        inline: "127.0.0.1 ${hostname} localhost\n::1 ${hostname} localhost"
Just be sure to have the above before you run the rancher registration line.
sudo docker run --rm --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.7 https://myrancher/v1/scripts/TOKEN
Is there any proper way of restarting an entire docker compose stack from within one of its containers?
One workaround involves mounting the docker socket:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
and then use the Docker Engine SDKs (https://docs.docker.com/engine/api/sdk/examples/).
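For illustration only (not the SDKs themselves), the same Engine API can also be reached with plain curl through the mounted socket; the container name web here is just a placeholder:
# restart a container named "web" via the Docker Engine API over the unix socket
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/web/restart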
However, this solution only allows restarting the containers themselves. There seems to be no way to send Compose commands, like docker compose restart, docker compose up, etc.
The only solution I've found to send docker compose commands is to open a terminal on the host from the container using ssh, like this: access host's ssh tunnel from docker container
This is partly related to How to run shell script on host from docker container?, but I'm actually looking for a more specific solution to only send docker compose commands.
I tried with this simple docker-compose.yml file
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 3000:80
Then I started a docker container using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/work docker
Then, inside the container, I did
cd /work
docker-compose up -d
and it started the container up on the host.
Please note that you have an error in your socket mapping. It needs to be
- /var/run/docker.sock:/var/run/docker.sock
(you have a period instead of a slash at one point)
As mentioned by @BMitch in the comments, the compose project name was the reason why I wasn't able to run docker compose commands inside the running container.
By default the compose project name is set to the directory name, so if the docker-compose.yml is run from a host directory named folder1, then the commands inside the container should be run as:
docker-compose -p folder1 ...
So now, for example, restarting the stack works:
docker-compose -p folder1 restart
Just as a reference, a fixed project name for your Compose project can be set using name: ... as a top-level attribute of the .yml file, but this requires Docker Compose v2.3.3 or later: see Set $PROJECT_NAME in docker-compose file
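For illustration, a minimal sketch of that top-level attribute, reusing the nginx example from above (requires Compose v2.3.3+):
# docker-compose.yml with a pinned project name
name: folder1
services:
  nginx:
    image: nginx
    ports:
      - 3000:80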
I have a compose file with v3 where there are 3 services sharing/using the same volume. While using swarm mode we need to create extra containers & volumes to manage our services across the cluster.
I am planning to use NFS server so that single NFS share will get mounted directly on all the hosts within the cluster.
I have found the two ways below of doing it, but they need extra steps to be performed on the Docker host -
Mount the NFS share using "fstab" or "mount" command on the host & then use it as a host volume for docker services.
Use Netshare plugin - https://github.com/ContainX/docker-volume-netshare
Is there a standard way where I can directly use/mount an NFS share using docker compose v3 by performing only a few steps, or none, on the Docker host (I understand that the "nfs-common" package is required anyhow)?
After discovering that this is massively undocumented, here's the correct way to mount an NFS volume using stack and docker compose.
The most important thing is that you need to be using version: "3.2" or higher. You will have strange and non-obvious errors if you don't.
The second issue is that volumes are not automatically updated when their definition changes. This can lead you down a rabbit hole of thinking that your changes aren't correct, when they just haven't been applied. Make sure you docker volume rm VOLUMENAME everywhere it could possibly exist, because if the volume already exists, it won't be validated.
The third issue is more of a NFS issue - The NFS folder will not be created on the server if it doesn't exist. This is just the way NFS works. You need to make sure it exists before you do anything.
(Don't remove 'soft' and 'nolock' unless you're sure you know what you're doing - this stops docker from freezing if your NFS server goes away)
Here's a complete example:
[root@docker docker-mirror]# cat nfs-compose.yml
version: "3.2"
services:
rsyslog:
image: jumanjiman/rsyslog
ports:
- "514:514"
- "514:514/udp"
volumes:
- type: volume
source: example
target: /nfs
volume:
nocopy: true
volumes:
example:
driver_opts:
type: "nfs"
o: "addr=10.40.0.199,nolock,soft,rw"
device: ":/docker/example"
[root@docker docker-mirror]# docker stack deploy --with-registry-auth -c nfs-compose.yml rsyslog
Creating network rsyslog_default
Creating service rsyslog_rsyslog
[root@docker docker-mirror]# docker stack ps rsyslog
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
tb1dod43fe4c rsyslog_rsyslog.1 jumanjiman/rsyslog:latest swarm-4 Running Starting less than a second ago
[root@docker docker-mirror]#
Now, on swarm-4:
root@swarm-4:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d883e0f14d3f jumanjiman/rsyslog:latest "rsyslogd -n -f /e..." 6 seconds ago Up 5 seconds 514/tcp, 514/udp rsyslog_rsyslog.1.tb1dod43fe4cy3j5vzsy7pgv5
root@swarm-4:~# docker exec -it d883e0f14d3f df -h /nfs
Filesystem Size Used Available Use% Mounted on
:/docker/example 7.2T 5.5T 1.7T 77% /nfs
root@swarm-4:~#
This volume will be created (but not destroyed) on any swarm node that the stack is running on.
root@swarm-4:~# docker volume inspect rsyslog_example
[
    {
        "CreatedAt": "2017-09-29T13:53:59+10:00",
        "Driver": "local",
        "Labels": {
            "com.docker.stack.namespace": "rsyslog"
        },
        "Mountpoint": "/var/lib/docker/volumes/rsyslog_example/_data",
        "Name": "rsyslog_example",
        "Options": {
            "device": ":/docker/example",
            "o": "addr=10.40.0.199,nolock,soft,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
root@swarm-4:~#
Depending on how I need to use the volume, I have the following 3 options.
First, you can create the named volume directly and use it as an external volume in compose, or as a named volume in a docker run or docker service create command.
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=nfsvers=4,addr=nfs.example.com,rw \
--opt device=:/path/to/dir \
foo
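For reference, a minimal compose sketch (the service name and image are placeholders) that consumes the pre-created foo volume as an external volume:
version: '3'
services:
  example-app:
    image: nginx
    volumes:
      - foo:/data
volumes:
  foo:
    external: true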
Next, there is the --mount syntax that works from docker run and docker service create. This is a rather long option, and when you embed a comma-delimited option within another comma-delimited option, you need to pass some quotes (escaped so the shell doesn't remove them) to the command being run. I tend to use this for a one-off container that needs to access NFS (e.g. a utility container to set up NFS directories):
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
foo
Lastly, you can define the named volume inside your compose file. One important note when doing this: the named volume only gets created once, and it is not updated with any changes. So if you ever need to modify the named volume, you'll want to give it a new name.
# inside a docker-compose file
...
services:
  example-app:
    volumes:
      - "nfs-data:/data"
  ...

volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=nfs.example.com,rw
      device: ":/path/to/dir"
  ...
In each of these examples:
Type is set to nfs, not nfs4. This is because docker provides some nice functionality on the addr field, but only for the nfs type.
The o field holds the options that get passed to the mount syscall. One difference between the mount syscall and the mount command in Linux is that the part of the device before the : is moved into an addr option.
nfsvers is used to set the NFS version. This avoids delays as the OS tries other NFS versions first.
addr may be a DNS name when you use type=nfs, rather than only an IP address. This is very useful if you have multiple VPCs with different NFS servers using the same DNS name, or if you want to adjust the NFS server in the future without updating every volume mount.
Other options like rw (read-write) can be passed to the o option.
The device field is the path on the remote NFS server. The leading colon is required. This is an artifact of how the mount command moves the IP address to the addr field for the syscall. This directory must exist on the remote host prior to the volume being mounted into a container.
In the --mount syntax, the dst field is the path inside the container. For named volumes, you set this path on the right side of the volume mount (in the short syntax) on your docker run -v command.
If you get permission issues accessing a remote NFS volume, a common cause I've encountered is containers running as root, with the NFS server set to root squash (changing all root access to the nobody user). You either need to configure your containers to run as a well known non-root UID that has access to the directories on the NFS server, or disable root squash on the NFS server.
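As a sketch of the non-root approach (the UID:GID value is only an example, and the service reuses the nfs-data volume from the snippet above), you can pin the container user in the compose file, assuming that UID owns the exported directories on the NFS server:
services:
  example-app:
    image: nginx
    user: "1000:1000"   # non-root UID:GID that has access on the NFS export
    volumes:
      - nfs-data:/data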
Yes, you can directly reference an NFS share from the compose file:
volumes:
  db-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=$SOMEIP,rw
      device: ":$PathOnServer"
And in an analogous way you could create an nfs volume on each host.
docker volume create --driver local --opt type=nfs --opt o=addr=$SomeIP,rw --opt device=:$DevicePath --name nfs-docker
My solution for AWS EFS, which works:
Create the EFS (don't forget to open NFS port 2049 in the security group)
Install nfs-common package:
sudo apt-get install -y nfs-common
Check if your efs works:
mkdir efs-test-point
sudo chmod go+rw efs-test-point
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
touch efs-test-point/1.txt
sudo umount efs-test-point/
ls -la efs-test-point/
The directory must be empty.
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
ls -la efs-test-point/
The file 1.txt must exist.
Configure docker-compose.yml file:
services:
  sidekiq:
    volumes:
      - uploads_tmp_efs:/home/application/public/uploads/tmp
  ...
volumes:
  uploads_tmp_efs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=[YOUR_EFS_DNS],nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
      device: [YOUR_EFS_DNS]:/
My problem was solved by changing the driver option type to nfs4.
volumes:
  my-nfs-share:
    driver: local
    driver_opts:
      type: "nfs4"
      o: "addr=172.24.0.107,rw"
      device: ":/mnt/sharedwordpress"
If you are also using AutoFS, in docker-compose you may add :shared to all paths, like this:
volumes:
  - /some/nfs/mounted:/path:shared
Thanks to a colleague, I found this to be a better approach for my case. Our users were getting an error stating 'too many symbolic links'...
Cheers!
I'm running a container with Jenkins using "docker outside of docker". My docker-compose file is:
---
version: '2'
services:
  jenkins-master:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/urandom:/dev/random
      - /home/jj/jenkins/jenkins_home/:/var/jenkins_home
    ports:
      - "8080:8080"
So all containers launched from the "jenkins container" are running on the host machine.
But when I try to run docker-compose in the "jenkins container" in a job that needs a volume, it takes the path from the host instead of from Jenkins. I mean, when I run docker-compose with
volumes:
  - .:/app
it is mounted at /var/jenkins_home/workspace/JOB_NAME on the host, but I want it to be mounted at /home/jj/jenkins/jenkins_home/workspace/JOB_NAME.
Any idea how to do this in a "clean" way?
P.D.: I did a workaround using environment variables.
Docker on the host will map the path as is from the request, and docker-compose will make the request with the path it sees inside the container. This leaves you with a few options:
Don't use host volumes in your builds. If you need volumes, you can use named volumes and use docker's stdin/stdout to read data in and out of those volumes. That would look like:
tar -cC data . | docker run -i --rm -v app-data:/target busybox /bin/sh -c "tar -xC /target". You'd reverse the docker/tar commands to pull data back out (sketched below).
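A sketch of that reverse direction (volume and directory names match the example above):
# stream the contents of the app-data volume back out and unpack them into ./data
docker run --rm -v app-data:/source busybox tar -cC /source . | tar -xC data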
Make the path on the host match that of the container. On your host, if you have access to make a symlink in /var, you can ln -s /home/jj/jenkins/jenkins_home /var/jenkins_home and then update your compose file to use the same path (you may need to specify /var/jenkins_home/. to follow the symlink).
Make the path of the container match that of the host. This may be the easiest option, but I'm not positive it would work (depends on where compose thinks it's running). Your Dockerfile for the jenkins master can include the following:
RUN mkdir -p /home/jj/jenkins \
&& ln -s /var/jenkins_home /home/jj/jenkins/jenkins_home
ENV JENKINS_HOME /home/jj/jenkins/jenkins_home
If the easy option doesn't work, you can rebuild the image from jenkins and change the JENKINS_HOME variable to match your environment.
Make your compose paths absolute. You can add some code to set a variable:
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#'). Then you can set your volume with that variable:
volumes:
  - ${CUR_DIR:-.}:/app
I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
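For example, a minimal sketch of that rebuild step (using the service name from the snippet above):
# rebuild the custom-db image and recreate the service after editing the sql files
docker-compose up -d --build my-db-app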
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
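A rough sketch of such a script, assuming the compose service is named my-db-app and the compose file lives in the current directory:
#!/bin/sh
# pack ./sql and unpack it into the sql-files named volume on the remote engine
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox /bin/sh -c "tar -xC /sql"
# bounce the db service so it picks up the new files
docker-compose restart my-db-app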
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are using a remote machine), you can have the config file written by the command.
Example for an nginx config with the official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file, set via Compose's extend feature, or loaded from the shell environment (wherever else you fetched it from); see the sketch after these links:
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
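For illustration, a sketch of the variable-substitution variant, where the value comes from the shell environment (single-line values could also live in an .env file):
# docker-compose.yml; ${NGINX_CONFIG} is substituted by Compose at parse time
services:
  nginx:
    image: nginx:alpine
    environment:
      NGINX_CONFIG: ${NGINX_CONFIG}
For example, export NGINX_CONFIG="$(cat redir.conf)" before running docker-compose up -d (dollar signs inside the config may still need escaping as $$).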
To get the original command of a container:
docker container inspect [container] | jq --raw-output '.[0].Config.Cmd'
(use .[0].Config.Entrypoint if you need the entrypoint instead)
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn uses AWS EFS underneath for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use the workaround with docker cp explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using Ansible, I can get the above to work using the command: ... module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
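For reference, per the above, eval $(weave env) essentially just exports something like:
export DOCKER_HOST=unix:///var/run/weave/weave.sock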
But when I use the docker module for ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
OR
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. The DNS does not resolve, so the servers never start. I do have other server options (like SERVER_ID for neo4j, etc.) set, just not shown here for simplicity.
Anyone run into this? I know the docker module for ansible uses docker-py and stuff. I wonder if there's some type of incompatibility with weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as it's on the same host. When I go to the other host, though, it cannot ping the containers on the first host. This is despite them registering in WeaveDNS (weave status dns), and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
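A rough sketch of that kludge when starting a container by hand (the image and flags mirror the question's examples; resolving bare hostnames may also need weaveDNS's default weave.local search domain):
# point the container's resolver at the docker bridge, where weaveDNS listens
BRIDGE_IP=$(weave docker-bridge-ip)
docker run --name neo-3 -d -P --dns "$BRIDGE_IP" --dns-search=weave.local. my/neo4j-cluster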