I am happily deploying a Cloudflared Tunnel on Kubernetes with YAML that looks like this. This deploys the Tunnel itself just fine - however, updating a Cloudflared tunnel also requires updating Cloudflare's DNS records so that the domain name will point to the tunnel, and I'm looking for a way to automate that.
The cloudflared tool can do this when provided with the right arguments (cloudflared tunnel route dns <tunnelID> <hostname>), which suggests that I could carry out this pre-deployment step with an initContainer, if I could parse the tunnel's config YAML and convert the list of domain names into commands. However, the cloudflare/cloudflared image does not appear to have any shell available, so I can't do something like grep '^- hostname: ' config.yaml | perl -pe 's/- hostname: //' | xargs -I {} cloudflared tunnel route dns <name> {}:
$ docker run --entrypoint /bin/sh cloudflare/cloudflared
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown.
$ docker run cloudflare/cloudflared /bin/sh
[the /bin/sh argument appears to be ignored - the image continues with its usual behaviour]
This is particularly confusing, as docker inspect cloudflare/cloudflared | jq '.[0].ContainerConfig.Cmd' refers to /bin/sh.
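(I suspect .ContainerConfig may just be build-time metadata from the image's last Dockerfile instruction rather than evidence of a shell being present; the runtime entrypoint and command should be under .Config instead, e.g.:)
docker inspect cloudflare/cloudflared | jq '.[0].Config.Entrypoint, .[0].Config.Cmd'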
I can see two paths forward here:
Find a way to access /bin/sh (and associated tools; grep, xargs, etc.) from the cloudflare/cloudflared image
Find a way to update the tunnel's DNS records outside the context of the cloudflared tool (I suspect I could use this API, but using the cloudflared tool would be a lot neater)
Thanks to Cloudycelt for recommending that I build my own image to carry out this task as an initContainer. I've described the process here.
I'm leaving this question open in case there's a better option that I've missed. I've also opened an Issue on the cloudflared repo asking if this is a feature that should be added.
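For anyone landing here, the initContainer approach from Cloudycelt's suggestion boils down to a script along these lines (a sketch only: it assumes an image that bundles cloudflared plus a POSIX shell, a config mounted at /etc/cloudflared/config.yaml, and a tunnel named my-tunnel - adjust the grep pattern to your YAML's indentation):
#!/bin/sh
# Sketch: route every hostname listed in the tunnel config to the tunnel.
set -e
grep -E '^ *- hostname: ' /etc/cloudflared/config.yaml \
  | sed 's/.*hostname: *//' \
  | while read -r hostname; do
      cloudflared tunnel route dns my-tunnel "$hostname"
    done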
Related
I have a large file on my laptop (localhost). I would like to copy this file to a docker container which is located on a remote server. I know how to do it in two steps, i.e. I first copy the file to my remote server and then I copy the file from remote server to the docker container. But, for obvious reasons, I want to avoid this.
A similar question which has a complicated answer is covered here: Copy file from remote docker container
However, in that question the direction is reversed: the file is copied from the remote container to localhost.
Additional request: is it possible to do this upload piece-wise, or to resume the upload from where it stopped in case of a network failure, instead of having to upload the entire file again? I ask because the file is fairly large, ~13 GB.
From https://docs.docker.com/engine/reference/commandline/cp/#corner-cases and https://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/ you would just do:
tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker exec -i CONTAINER tar Cxf DEST_PATH -
or
tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker cp - CONTAINER:DEST_PATH
Or untested, no idea if this works:
DOCKER_HOST=ssh://you#host docker cp SRC_PATH CONTAINER:DEST_PATH
This will work if you are running a *nix server and a Docker container with an SSH server in it.
You can create a local tunnel on the remote server by following these steps:
mkfifo host_to_docker
netcat -lkp your_public_port < host_to_docker | nc docker_ip_address 22 > host_to_docker &
The first command creates a pipe, which you can check with file host_to_docker.
The second one uses the greatest network utility of all time, netcat. It just accepts a TCP connection and forwards it to another netcat instance, relaying the underlying SSH traffic to the SSH server running in the Docker container and writing its responses to the pipe we created.
The last step is:
scp -P your_public_port payload.tar.gz user@remote_host:/dest/folder
You can use the DOCKER_HOST environment variable and rsync to achieve your goal.
First, you set DOCKER_HOST, which makes your docker client (i.e., the docker CLI utility) connect to the remote server's docker daemon over SSH. This probably requires you to create an ssh-config entry for the destination server.
export DOCKER_HOST="ssh://<your-host-name>"
Next, you can use docker exec in conjunction with rsync to copy your data into the target container. This requires you to obtain the container ID via, e.g., docker ps. Note that rsync must be installed in the container.
rsync -ar -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>
Since rsync is used, this also allows you to resume interrupted uploads later, provided the appropriate flags are used.
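For the resume requirement specifically, something like this should work (a sketch: --partial keeps an interrupted destination file in place, so a re-run can avoid resending most of the data already copied, and --progress shows the transfer state):
rsync -ar --partial --progress -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>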
Setup
As per Ory Kratos Docker Documentation I run:
$ docker pull oryd/kratos:v0.7.1-alpha.1
$ docker run --rm -it oryd/kratos version
Version: v0.7.1-alpha.1
Build Commit: 4fe76af1302d45ddf4cf3c2c5949311c9cf1f8b8
Build Timestamp: 2021-07-22T17:41:40Z
Running the image in a container
What happens here is that no configuration file is specified, so it just errors out, listing the keys that are required.
$ docker run oryd/kratos:v0.7.1-alpha.1
The configuration contains values or keys which are invalid:
identity: <nil>
^-- one or more required properties are missing
The configuration contains values or keys which are invalid:
selfservice.default_browser_return_url: <nil>
^-- one or more required properties are missing
The configuration contains values or keys which are invalid:
courier.smtp.connection_uri: <nil>
^-- one or more required properties are missing
time=2021-07-27T17:46:47Z level=fatal msg=Unable to instantiate configuration....
Issue
When using the Docker Images, Kratos does not recognize a configuration file with the --config flag.
Since containers run independently, I figured I'd have to use a file on the daemon host while running the serve command, and it seems Ory Kratos has a section for this as well at Ory Kratos Docker Image.
docker run --rm -it oryd/kratos serve --config /home/ory/kratos.yml
FATA[2021-07-27T18:35:41Z] Unable to instantiate configuration. audience=application error=map[message:open /home/ory/kratos.yml: no such file or directory] service_name=Ory Kratos service_version=v0.7.1-alpha.1
Relevant Files:
The configuration
message:open /home/ory/kratos.yml: no such file or directory
Your error above means the container can't find /home/ory/kratos.yml.
I figured I'd have to use a file on the Daemon
If I understand you correctly, you put kratos.yml in the root filesystem of the Docker host, but you did not put it inside the container, so the container can't find the configuration file.
So you need to mount the file from the host into the container, something like this:
docker run --rm -v /home/ory/kratos.yml:/home/ory/kratos.yml -it oryd/kratos serve --config /home/ory/kratos.yml
You need to use the correct path of kratos.yml on the host, of course.
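For example, if kratos.yml sits in your current working directory on the host, something like this should do (a sketch; the :ro suffix just mounts it read-only):
docker run --rm -it \
  -v "$(pwd)/kratos.yml:/home/ory/kratos.yml:ro" \
  oryd/kratos:v0.7.1-alpha.1 serve --config /home/ory/kratos.yml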
For details, refer to this.
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers.
Nginx container that serves the website where users can upload their video files.
Video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first 2 that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A sketch of this option follows after this list.
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are using the same docker-network and are able to communicate over it.
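To make option (1) concrete, a rough sketch looks like this (the image, container and script names are made up for illustration, and see the warning below before using this anywhere outside a sandbox):
# start container 1 with the host's Docker socket bind-mounted in
# (container 1 must also have the docker CLI installed)
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# then, from inside container 1, you can drive the sibling container directly:
docker exec video-processor /scripts/process.sh /videos/upload.mp4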
Warning:
To be completely clear, do not use option 1 outside of a lab or a very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a python package especially for this use-case.
Flask-Shell2HTTP is a Flask extension to convert a command-line tool into a RESTful API with just 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP
app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")
It can be called easily, like:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the results.
It supports file upload, callback functions, reactive programming and more. I recommend you check out the Examples.
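By the same pattern, the ./myscript registered under /commands/run above could be triggered with whatever arguments your script expects (the file path here is just an illustration):
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["/videos/upload.mp4"]}' http://localhost:4000/commands/run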
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install docker in the container (and do docker-in-docker stuff)
You'll need to share the unix socket, which is not a good thing if you don't know what you're doing.
So, this leaves us with two solutions:
Install ssh in your container and execute the command through ssh
Share a volume and have a process that watches for something to trigger your batch (see the sketch below)
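The second option can be as simple as a polling loop in container 2 watching a volume that both containers mount (a sketch; the paths and script name are hypothetical):
# runs in container 2 against a volume mounted at /shared in both containers
while true; do
  if [ -f /shared/trigger ]; then
    rm /shared/trigger
    /usr/local/bin/process_video.sh /shared/input.mp4   # your processing script
  fi
  sleep 2
done
Container 1 then only has to write the uploaded file into /shared and touch /shared/trigger to kick off the batch.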
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password entry with ssh-agent, or you can use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try to mount the docker.sock into the container you're trying to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
$ docker run --rm -it busybox
/ # who
<empty>
In another session I'm trying to attach to this docker container, expecting a second user to appear, but no luck again:
$ docker attach `docker container ls | grep busybox | cut -d" " -f1`
/ # who
<empty again>
So the question is: why are no logins recorded, neither by the first run-and-attach nor by subsequent attaches? And why isn't there even a single login shown for this container?
who reads the list of users from /var/run/utmp. On a regular Linux system, the login program prompts for the username and password and then starts the user's shell. It also updates /var/run/utmp with the new user.
The same thing happens for SSH and Telnet servers. They are expected to update /var/run/utmp.
In a Docker container, login is usually not executed. Docker isolates resources from the host system with Linux namespaces; it does not provide a complete Linux system. When you enter a Docker container, the given entrypoint or command is executed as PID 1.
Subsequent docker exec calls are handled in a similar way. Docker enters the namespace of the container and executes the given command.
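You can see this for yourself from the host (to my knowledge the stock busybox image ships no utmp records at all, and PID 1 is simply the command you ran):
docker run --rm busybox ls -l /var/run/utmp      # the file who would read
docker run --rm busybox cat /proc/1/cmdline      # PID 1 is the given command, not init/login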
EDIT: after some reading I see Alexander's answer as more to the point. A couple of useful links I've read along the way:
https://docs.docker.com/engine/security/security/
https://lwn.net/Articles/531114
As far as I understand, the busybox Docker image is very basic and does not support all the functionality of a full-fledged Linux system.
In Here I thought I understood Docker until I saw the BusyBox docker image there is a discussion of what that image is and what it is for.
I am trying to run Airflow Webserver on App Engine Flexible however for it to work I need a mounted GCS bucket. I am using custom runtime.
The reason I am doing this is to get the secured endpoint that App Engine provides together with IAP.
My app.yaml is a simple file with service name, env and runtime
My Dockerfile is mostly a bunch of apt-get installs, and in CMD it mounts the bucket with gcsfuse and runs the Airflow webserver; nothing special.
The error I am getting when trying to use gcsfuse in App Engine is:
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
I know that Google Composer exists, but it is way too expensive for my needs. So I would prefer to create a VM with a scheduler and a webserver on GAE, sharing a GCS bucket - similar to what Composer gives, but without all the HA and the insane cost for the simple things I want to run.
I am trying to do this in App Engine; all the answers I have found so far mention GKE for some reason.
I know it is a privilege problem; however, in App Engine I do not see any option to set privileges, so a way to do that would be very helpful.
Is it even possible to do what I want to do on App Engine?
This is possible. I'll show you how to do it manually; you might need a shell script to deal with multiple instances (a consolidated per-instance snippet is included at the end of this answer).
define several vars used in this manual
service=YOUR_APPENGINE_SERVICE
version=YOUR_APPENGINE_VERSION
project=PROJECTID
get instance list
gcloud app instances list --project $project
SERVICE VERSION ID VM_STATUS DEBUG_MODE
default *************** instance-id-1 RUNNING YES
default *************** instance-id-2 RUNNING
ssh into one instance
gcloud app instances ssh instance-id-1 --service $service --version $version --project $project
get image id
docker ps | grep gaeapp | awk '{print $2}'
you will get an imageid
get env of gaeapp
docker exec gaeapp env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=*****
GAE_MEMORY_MB=614
GAE_INSTANCE=****
GAE_SERVICE=default
PORT=8080
GCLOUD_PROJECT=*****
GAE_VERSION=*****
GOOGLE_CLOUD_PROJECT=*****
restart gaeapp with privilege
docker rm -f gaeapp
docker run --privileged -d -p 8080:8080 --name gaeapp -e GAE_MEMORY_MB=614 -e GAE_INSTANCE=instance-id-1 -e GAE_SERVICE=$service -e PORT=8080 -e GCLOUD_PROJECT=$project -e GAE_VERSION=$version -e GOOGLE_CLOUD_PROJECT=$project $imageid
enter gaeapp (assuming you have gcsfuse installed and a service account key JSON at /test-service-account.json)
$ docker exec -it gaeapp bash
[in gaeapp] # GOOGLE_APPLICATION_CREDENTIALS=/test-service-account.json gcsfuse BUCKET /mnt/
Using mount point: /mnt
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
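For reference, the per-instance steps above can be collapsed into one untested snippet to paste after ssh-ing into each instance (it only strings together the commands already shown; the environment variables are the ones printed by docker exec gaeapp env):
# capture the image id and the gaeapp environment before removing the container
imageid=$(docker ps | grep gaeapp | awk '{print $2}')
eval "$(docker exec gaeapp env | grep -E '^(GAE_|GCLOUD_|GOOGLE_|PORT=)' | sed 's/^/export /')"
# recreate gaeapp with --privileged so gcsfuse can reach the fuse device
docker rm -f gaeapp
docker run --privileged -d -p 8080:8080 --name gaeapp \
  -e GAE_MEMORY_MB="$GAE_MEMORY_MB" -e GAE_INSTANCE="$GAE_INSTANCE" \
  -e GAE_SERVICE="$GAE_SERVICE" -e GAE_VERSION="$GAE_VERSION" -e PORT="$PORT" \
  -e GCLOUD_PROJECT="$GCLOUD_PROJECT" -e GOOGLE_CLOUD_PROJECT="$GOOGLE_CLOUD_PROJECT" \
  "$imageid"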
To be honest, I have tried all possible solutions, and finally the above solution worked. Unfortunately, it worked for only 2-3 days. After some time, App Engine restarts the instances automatically, without any failure in the app, so all the changes made for gcsfuse disappeared.
The main thing for gcsfuse to work in a container is to run the docker image in privileged mode, and App Engine does not allow that.
The final solution that we are using is GKE, which is working fine.
Note: it was expected that GAE would have some provision for privileged mode, but it does not for now. The Google team may introduce it in the future. Thanks!