How can I trigger a script on my host machine from a docker container?

I have a script on my host machine that needs to run every time an action occurs within a docker container (running a Django Rest API). This script depends on many local files and environment variables, so it is not possible for me to just map everything (volumes and env vars) from the host to the container in order to call it inside the container. It must be called from the host machine. After it is executed, it will generate some output files that will be read and used from the container (through a mounted volume).
Is there any way I can achieve this? I've seen lots of comments about using the docker socket and mapping volumes, but none of them seem to suit this case.
Thanks!

Related

Gitlab Runner, docker executor, deploy from linux container to CIFS share

I have a Gitlab runner that runs all kinds of jobs using Docker executors (the host is Ubuntu 20, the guests are various Linux images). The runner runs containers as unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
Gitlab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside the container unless running privileged, which I would like to avoid.
I also tried the suggestion to use --cap-add as described in Mount SMB/CIFS share within a Docker container, but they do not seem to be enough on my host; there are probably other required capabilities and I have no idea how to identify them. Also, this looks only slightly less ugly than running privileged.
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command for example I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy over an executable (which you can build anywhere else) written in Go that listens on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can use curl to send a file to the remote Windows machine on port 8080.
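To give a rough idea of that workflow (a sketch only: the artifact path and the exact upload endpoint/verb depend on how file-receive.go is written, so the -T upload below is an assumption):

# cross-compile the receiver for Windows on any machine with a Go toolchain
GOOS=windows GOARCH=amd64 go build -o file-receive.exe file-receive.go
# copy file-receive.exe to the Windows machine and run it there, then from the
# GitLab CI job push the artifact over HTTP to port 8080
curl --fail -T build/myapp.zip http://myserver:8080/myapp.zip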

Defining Docker runtime flags from within Dockerfile

We're running Docker on Digital Ocean App Engine, and can't pass flags (e.g. --cpus) to docker run.
We can't use docker-compose either.
Is it possible to set an environment variable (ARG? ENV?), e.g. $CPUS=blah, in a way that can be picked up by the Docker instance?
Stated differently, are there internal environment variables that correspond to specific flags and that can be set from within the Dockerfile / environment itself?
As we can see in the very first steps of the official Docker documentation (this link):
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
The main concept of Docker is to isolate each container from the other containers, their environment variables and anything else related to them. So the only things we can access and modify are the ones that reside outside of the container, such as (see the example after this list):
exposing ports
exposing volumes
mapping a container port to a host port
mapping a container volume to a host volume and vice versa
...
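For example, a minimal sketch (image name and values are placeholders): the CPU limit the question asks about, together with port and volume mappings, is passed to docker run on the host side rather than set in the Dockerfile:

# resource limits, port mappings and volume mappings are all docker run flags
docker run --cpus=2 --memory=512m -p 8080:80 -v /host/data:/app/data my-image:latest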

Is there a way to get an optional bind mount in docker swarm

I have a swarm service that bind-mounts a file that may not exist. If the file does not exist the service fails to deploy (and I get logs complaining about the missing file). I would prefer to have the service deploy anyway, just missing that mount. Is there a way to let that happen?
The file being mounted is a unix socket for a local memcached instance. The app can run without it, and we don't run memcached on every node, so I'd like to allow the service to deploy even if the bind mount fails (for example if the ideal node goes down and the service has to move to another node that doesn't run memcached).
I realize I could move the mount point to a directory that will always exist on every host machine, but I'd prefer to keep the bind mount exposure minimal if possible.
Recently I had a similar scenario: I implemented an NFS server on one node and then mounted it on every swarm node. That way, I always have the files at the same path.
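As a sketch of that setup (the NFS server address, export path and image name are placeholders), a swarm service can mount the NFS export through a local volume with NFS options, so the same path exists on whichever node the task is scheduled:

# the volume is created on demand on each node and backed by the NFS export
docker service create --name myservice \
  --mount 'type=volume,source=shared-nfs,target=/var/run/shared,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/exports/shared,volume-opt=o=addr=10.0.0.10' \
  my-image:latest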

Can I use vim from the host to edit files inside a docker container? If so, how?

I have a project on which we have set up a development environment using docker-compose. We are using volumes to sync files from host to docker containers. The performance for sync is kinda bad on Mac.
I recently saw some extension for VS code which allows you to edit files inside the docker container. Here is the link to that extension.
Can I do something similar with vim?
Thanks a lot in advance!
I tried to SSH into the docker container but wasn't successful. It looks like I'll have to use docker exec to get into it instead.
You can use vim to edit files remotely, provided that you have SSH access to the container. To get that, you'll have to generate an SSH key pair on your machine and place the public key inside the container. You also need an SSH server running inside the container and the SSH port exposed.
If editing the file from the host machine using Vim is the absolute requirement, this is the way to go.
But if you only want to make debugging easier, consider using bind mounts. You bind-mount the target file from your host machine and edit it locally. The container accessing the file will immediately see the changes reflected inside it.
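A rough sketch of both options (container names, port and paths are made up): with an SSH server running in the container and its port published, vim's built-in netrw can open the file over scp; with a bind mount you simply edit the host copy:

# option 1: SSH server inside the container, port 22 published on the host as 2222
docker run -d -p 2222:22 --name devbox my-image:latest
vim scp://root@localhost:2222//app/settings.py
# option 2: bind-mount the source directory and edit it locally
docker run -d -v "$(pwd)/src:/app" --name devbox2 my-image:latest
vim src/settings.py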

docker containers communication on dev machine

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used in production and on my local machine (mac). How are people providing configuration like this these days?
So far I have come up with having my process take environmental variables as arguments which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables)
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the search machine as 'elasticsearch'.
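With that (legacy) linking in place, the application inside your container can address the search machine by its alias; a quick sketch from inside the container:

# the alias 'elasticsearch' resolves to the linked container's address
curl http://elasticsearch:9200/_cluster/health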
How about writing it into the configuration file of your application and mounting the configuration directory into your container with -v?
To make it more organized, I use Ansible for orchestration. This way you can have a template of the configuration file for your application, while the actual parameters live in the variables file of the corresponding Ansible playbook at a centralized location. Ansible will be in charge of copying the template over to the desired location and doing the variable substitution for you. It has also recently enhanced its Docker support.
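A minimal sketch of the mount side (paths and names are placeholders): the Ansible-rendered configuration directory on the host is bind-mounted, typically read-only, into the container:

# the container reads its config from the host-managed directory
docker run -d -v /etc/myservice/conf:/app/conf:ro --name myservice my-image:latest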
Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not ip addresses. Even with ip addresses you'd have no problem as long as you only have one ES and you're willing to restart your service every time the ES ip address changes.
You should really ask someone who knows for sure how you resolve these things in your production environments, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you are allowed to use it.
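For the environment-variable route, a sketch (variable name, service name and image are placeholders) that passes a service name rather than a hard-coded IP:

# the value points at a resolvable service name, not an IP address
docker run -d -e ELASTICSEARCH_URL=http://elasticsearch:9200 --name myservice my-image:latest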
