I have deployed a standalone CoreOS server from the VMware image, following this guide, to try out CoreOS.
After the deployment succeeded, I found that my CoreOS only enables the Docker service; the etcd and fleet services are not running. I know how to use systemd to start the etcd and fleet services manually, and I also know that a proper cloud-config can install CoreOS so that the etcd and fleet services start automatically.
But I want to know:
Is it possible to place a unit file in /etc/systemd/system to make systemd start the etcd service automatically?
If so, what should the content of the unit file be?
If not, what is the alternative?
Thanks
Yes. You need an etcd.service and a fleet.service with an [Install] section. I've added WantedBy=default.target in mine.
They are already placed on CoreOS systems within /usr/lib64/systemd/system/. You can copy them to /etc/systemd/system/:
$ cp /usr/lib64/systemd/system/etcd.service /etc/systemd/system/
$ cp /usr/lib64/systemd/system/fleet.service /etc/systemd/system/
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/fleet.service
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/etcd.service
$ systemctl enable etcd.service
$ systemctl enable fleet.service
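If you want the services running right away rather than waiting for a reboot, a quick follow-up (a sketch; run with root privileges) is:
$ sudo systemctl daemon-reload
$ sudo systemctl start etcd.service fleet.service
$ systemctl status etcd.service fleet.service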
I'll also give you the general warning here that I have no idea how changes to /etc/systemd/ behave in the long run, given CoreOS's upgrade system. An upgrade could wipe out /etc/systemd/, leaving you confused about what happened to your customized systemd units that are not managed by cloud-init.
The proper way to do this is with cloud-config. Specifically for VMware, you'll need to serve the cloud-config via config-drive as documented.
It's kind of a pain, but it'll work.
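For reference, a minimal cloud-config that starts both services could look roughly like this (a sketch; the discovery token is a placeholder, generate your own at https://discovery.etcd.io/new):
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start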
I read the Enable Live Restore documentation, but when I tried it, it didn't work.
ubuntu@ip-10-0-0-230:~$ cat /etc/docker/daemon.json
{
"live-restore": true
}
I started an nginx container in detached mode.
sudo docker run -d nginx
c73a20d1bb620e2180bc1fad7d10acb402c89fed9846f06471d6ef5860f76fb5
$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS
c73a20d1bb62   nginx   "nginx -g 'daemon of…"   5 seconds ago   Up 4 seconds
Then I stopped dockerd:
sudo systemctl stop snap.docker.dockerd.service
and I checked that there was no container running
ps aux | grep nginx
After that, I restarted the Docker service and still there wasn't any container running.
Any idea? How does this "live restore" feature actually work?
From the documentation, after modifying the daemon.json (adding "live-restore": true) you need to:
Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process.
You can also do this, but it's not recommended:
If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
It seems that you did not do this step. You said that you made the modification to daemon.json, then directly started a container, and then stopped dockerd.
In order to make the Live Restore functionality work, follow all the steps in the right order:
Modify the daemon.json by adding "live-restore": true
Reload the Docker daemon with the command:
sudo systemctl reload docker
Then try the functionality with your example (firing up a container and making the daemon unavailable).
I've tested it and it works if you follow the steps in order:
Tested with Docker version 19.03.2, build 6a30dfc and Ubuntu 19.10 (Eoan Ermine)
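For reference, the whole check looks roughly like this (a sketch assuming the nginx image and a systemd-managed Docker installation):
$ cat /etc/docker/daemon.json
{
  "live-restore": true
}
$ sudo systemctl reload docker
$ sudo docker run -d nginx
$ sudo systemctl stop docker
$ ps aux | grep [n]ginx        # the container's nginx processes should still be alive
$ sudo systemctl start docker
$ sudo docker ps               # the container should still show up as running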
You've installed Docker via snap: snap.docker.dockerd.service.
Unfortunately, that's not recommended, since the snap model is not fully compatible with Docker. Furthermore, docker-snap is no longer maintained by Docker, Inc. Users encounter various issues when they install Docker via snap (see 1 2).
You should delete the snap Docker installation to avoid any potential overlapping installation issues, via this command:
sudo snap remove docker --purge
Then install Docker the official way, and after that try the Live Restore functionality by following the steps above.
Also, be careful when restarting the daemon; the documentation says:
Live restore upon restart
The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually stop the containers.
Also, about downtime:
Impact of live restore on running containers
If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them.
On Linux, you can modify the kernel’s buffer size by changing /proc/sys/fs/pipe-max-size.
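For example, you can inspect and raise that limit with a standard sysctl (the value below is only an illustration, not a Docker-recommended number):
$ cat /proc/sys/fs/pipe-max-size            # typically 1048576 (1 MiB) by default
$ sudo sysctl -w fs.pipe-max-size=4194304   # e.g. allow pipes up to 4 MiB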
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers.
Nginx container that serves the website where users can upload their video files.
Video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first two that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A short sketch of this approach follows below.
You could install SSHD on container 2 and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are using the same docker-network and are able to communicate over it.
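A minimal sketch of option 1 (the image, container names, and script path are made up for illustration; see the warning below before using this outside a lab):
# on the host: start container 1 with the Docker socket bind mounted
$ docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# inside container 1 (with the docker CLI installed): run the script in container 2
$ docker exec video-processor bash /opt/scripts/process.sh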
Warning:
To be completely clear, do not use option 1 outside of a lab or a very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a Python package especially for this use case.
Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API with a mere 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# POST /commands/saythis runs `echo` with the supplied arguments
shell2http.register_command(endpoint="saythis", command_name="echo")
# POST /commands/run runs ./myscript with the supplied arguments
shell2http.register_command(endpoint="run", command_name="./myscript")

if __name__ == "__main__":
    app.run(port=4000)
This can then be called easily, like:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
It supports file uploads, callback functions, reactive programming, and more. I recommend checking out the Examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install Docker in the container (and do Docker-in-Docker stuff)
You'll need to share the Unix socket, which is not a good thing if you don't know exactly what you're doing.
So, this leaves us with two solutions:
Install SSH in your container and execute the command through SSH
Share a volume and have a process that watches for something to trigger your batch (a rough sketch follows below)
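Here is a minimal sketch of that watcher idea, assuming a volume mounted at /jobs in both containers and a hypothetical processing script; adapt the paths and names to your setup:
# inside container 2: poll the shared volume for trigger files
while true; do
  for f in /jobs/*.trigger; do
    [ -e "$f" ] || continue             # no trigger files yet
    bash /opt/scripts/process.sh "$f"   # hypothetical processing script
    rm -f "$f"                          # consume the trigger so it only runs once
  done
  sleep 2
done
Container 1 then only has to write a file into /jobs to kick off the work; inotifywait (from inotify-tools) could replace the polling loop if you prefer an event-driven approach.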
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password entry with ssh-agent, or you can use something a bit more hacky like sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try to mount the docker.sock in the container you're trying to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Is there any way to run Firecracker inside a Docker container?
I tried the basic networking in Firecracker, but having a containerized Firecracker would have many benefits:
No hurdle to create and manage overlay networks and attach to them
Deploy in Docker swarm and in Kubernetes
No need to clean IPTables/Network rules
etc.
You can use kata-containers to simplify this:
https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support
I came up with something very basic, like this:
https://github.com/s8sg/docker-firecracker
It allows creating Go applications that can run inside containerized Firecracker.
Setup Tutorial
You can find a good tutorial with all the basics from Weaveworks:
fire-up-your-vms-with-weave-ignite
it introduces
weaveworks ignite (Github)
Ignite works like a one-to-one replacement for "docker", and it works on my Raspberry Pi 4 with Debian 11 too.
How to use
Create and start a VM
$ sudo ignite run weaveworks/ignite-ubuntu \
--cpus 1 \
--memory 1GB \
--ssh \
--name my-vm1
Show your VM Processes
$ ignite ps
Log in to your running VM
$ sudo ignite ssh my-vm1
It takes a couple of seconds to start (manually) a new VM on my Raspberry Pi 4 (8 GB, 64-bit Debian 11):
Log in to any of these:
$ sudo ignite ssh my-vm3
Footloose
If you add footloose, you can start up a cluster of MicroVMs, which allows additional scenarios. It works more or less like docker-swarm, but with VMs. Footloose reads a description of the cluster of machines to create from a file, by default named footloose.yaml. Please check
footloose vm cluster (Github)
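For illustration, a basic footloose.yaml (this follows the plain layout from the footloose README; the image, machine count, and the ignite backend settings may differ in your setup) looks roughly like this:
cluster:
  name: cluster
  privateKey: cluster-key
machines:
- count: 2
  spec:
    image: quay.io/footloose/centos7
    name: node%d
    portMappings:
    - containerPort: 22
Running footloose create in the same directory then brings up the machines described in the file.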
Note: be aware of Apache Ignite, which is a solution for something else entirely; don't get confused by it.
The initial idea for this answer is from my gist here
The Docker daemon isn't starting anymore on my computer (Linux / CentOS 7), and I strongly suspect that a container set to auto-restart is to blame. If I start the daemon manually, the last line I see is "Loading containers: start", and then it just hangs.
What I'd like to do is to start the daemon without starting any containers. But I can't find any option to do that. Is there any option in docker to start the daemon without also starting containers set to automatically restart? If not, is there a way to remove the containers manually that doesn't require the docker daemon running?
I wrote this little script to mark all the containers as stopped before Docker is started. It requires jq to be installed.
# iterate over every container's state file in Docker's data directory
for i in /var/lib/docker/containers/*/config.v2.json; do
    # create the temp file with the same permissions/ACLs as the original
    touch "$i.new" && getfacl -p "$i" | setfacl --set-file=- "$i.new"
    # rewrite the state so the daemon sees the container as not running
    cat "$i" | jq -c '.State.Running = false' > "$i.new" && mv -f "$i.new" "$i"
done
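Usage would be something like this (a sketch; the script file name is made up, the daemon must already be stopped, and the loop needs root to touch /var/lib/docker):
$ sudo systemctl stop docker
$ sudo bash mark-containers-stopped.sh   # the loop above, saved to a file
$ sudo systemctl start docker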
I think we need to verify the storage driver you are using for Docker. Devicemapper is known to have some issues similar to what you are describing. I would suggest moving to overlay2 as a storage driver.
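A sketch of what that change could look like in /etc/docker/daemon.json (note: switching storage drivers makes existing images and containers invisible to the daemon, so treat it as a fresh start):
$ cat /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
$ sudo systemctl restart docker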
If you are not running this on a prod system, you can try the steps below to see whether the daemon comes up or not:
Stop the daemon process
Clean the Docker home directory; the default is /var/lib/docker/*
You may not be able to remove everything; in that case, the safe bet is to stop Docker from autostarting (systemctl disable docker) and restart the system
Once the system is up, execute step 2 again and try to restart the daemon. Hopefully everything will come up.
Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an Ubuntu 16.04 image, and as far as I understand, the only way to do this without installing ssh into the container is via docker exec -it <container_name> bash
However, as is well documented, for each bash shell process you generate, you leave a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with thousands of idle shells, a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection, as they would be over ssh?
One way is to make sure a proper init process runs in your container.
In recent versions of Docker there is an --init option to docker run that should do this. It uses tini to run as init, which can also be used with previous versions.
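For example (a sketch; the container name is made up, and --init is available on recent docker run versions):
$ docker run --init -d --name myapp ubuntu:16.04 sleep infinity
$ docker exec -it myapp bash
Inside the container, PID 1 is now tini, which reaps the orphaned shell processes left behind by interrupted docker exec sessions.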
Another option is something like the phusion-baseimage project that provides a base docker image with this capability and many others (might be overkill).