Is there any way to run Firecracker inside a Docker container?
I have tried the basic networking setup in Firecracker, but a containerized Firecracker would have many benefits:
No hurdle to create and manage overlay networks and attach to them
Deploy in Docker Swarm and in Kubernetes
No need to clean up iptables/network rules
etc.
You can use Kata Containers to simplify this:
https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support
I came up with something very basic here:
https://github.com/s8sg/docker-firecracker
It allows creating Go applications that run inside containerized Firecracker.
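Under the hood, the main requirement is that the container can reach the host's KVM and create tap devices. A rough sketch of such a docker run invocation (the image name is hypothetical, and the flags can likely be narrowed down further):
# needs hardware virtualization on the host; my-firecracker-image is a placeholder
docker run --rm -it \
    --device /dev/kvm \
    --device /dev/net/tun \
    --cap-add NET_ADMIN \
    my-firecracker-image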
Setup Tutorial
You'll find a good tutorial covering all the basics on the Weaveworks blog:
fire-up-your-vms-with-weave-ignite
It introduces
weaveworks ignite (GitHub)
Ignite works as a one-to-one replacement for docker, and it also runs on my Raspberry Pi 4 with Debian 11.
How to use
Create and start a VM
$ sudo ignite run weaveworks/ignite-ubuntu \
--cpus 1 \
--memory 1GB \
--ssh \
--name my-vm1
Show your VM Processes
$ ignite ps
Log in to your running VM
$ sudo ignite ssh my-vm1
It takes a couple of seconds to start (manually) a new VM on my Raspberry Pi 4 (8 GB, 64-bit Debian 11):
Log in to any of them
$ sudo ignite ssh my-vm3
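Since the CLI mirrors docker, stopping and cleaning up should work the same way (command names taken from ignite's docker-like CLI):
$ sudo ignite stop my-vm1
$ sudo ignite rm my-vm1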
Footloose
If you add footloose, you can start up a cluster of MicroVMs, which enables additional scenarios. It works more or less like docker swarm, but with VMs. Footloose reads the description of the cluster of machines to create from a file, named footloose.yaml by default. Please check
footloose vm cluster (Github)
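For illustration, a minimal footloose.yaml might look like the sketch below (based on the footloose README; field names and ignite-backend support may vary between versions):
cluster:
  name: cluster
  privateKey: cluster-key
machines:
- count: 3
  spec:
    image: weaveworks/ignite-ubuntu
    name: vm%d
    backend: ignite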
Note: be aware of Apache Ignite, which is a solution for something else entirely; don't get confused by it.
The initial idea for this answer comes from my gist here.
Related
I'm trying to create a CI/CD infrastructure using Jenkins. Considering recoverability, performance, and maintainability, I decided to run both Jenkins and its agents as Docker containers.
There are some restrictions that I cannot work around:
Cannot build this setup in a Linux environment (IT policy)
Cannot use WSL2 on Windows (I don't know when the IT department will release the Windows update that supports WSL2)
Security is a very high-priority topic
As far as I can see, a Docker-outside-of-Docker setup is the proper way to implement this. If I run the container as root using the command below, I can bind the docker.sock file and Jenkins jobs can create containers from Dockerfiles as agents:
docker run --name dood `
-d -u root --restart on-failure `
-p "8080:8080" -p "50000:50000" `
-v //var/run/docker.sock:/var/run/docker.sock `
-v /usr/local/bin/docker:/usr/bin/docker `
jenkins/jenkins:lts
However, it doesn't work if the Jenkins container runs as a non-root user. Running as root is not acceptable, as it creates a vulnerability. The suggested way is to run the container without the root user and add the "jenkins" user to the "docker" group:
groupadd docker
usermod -a -G docker jenkins
newgrp docker
Unfortunately, it doesn't work: a "Got permission denied..." error occurs when Jenkins jobs try to create agent containers. I restarted Docker Desktop and the container, but the result is the same. I am not sure, but a possible reason might be the Windows environment; this may work in a Linux environment.
As a final effort, I tried the solution described in a Stack Overflow topic. I noticed that the "setfacl" command does not work when Docker runs with Hyper-V. If I switch to WSL2 on my demo PC, the commands below solve the problem:
gpasswd -a jenkins docker
apt-get install acl
setfacl -m user:jenkins:rw /var/run/docker.sock
Unfortunately, the target Windows environment does not support WSL2, so I cannot use this solution. Moreover, the setfacl change is not persistent, but that is another story.
An alternative solution might be activating the "Expose daemon on tcp://localhost:2375 without TLS" option. However, this is not acceptable from a security point of view, so I crossed it out.
I am curious whether it is even possible to implement a Docker-outside-of-Docker setup for Jenkins on Docker Desktop for Windows. Considering the named restrictions, I am open to alternative setups/solutions as well.
I am quite new to Docker and not very experienced with Jenkins, so if I use the wrong terminology or approach, please let me know.
I am trying to connect to Docker containers using the default SSHManager.
These containers only have a running sshd with public-key authentication, and Julia installed.
Here is my Dockerfile:
FROM rastasheep/ubuntu-sshd
RUN apt-get update && apt-get install -y julia
RUN mkdir -p /root/.ssh
ADD id_rsa.pub /root/.ssh/authorized_keys
I am running the container using:
sudo docker run -d -p 3333:22 -it --name julia-sshd julia-sshd
And then, on the host machine in the Julia REPL, I get the following error:
julia> import Base:SSHManager
julia> addprocs(["root@localhost:3333"])
stdin: is not a tty
Worker 2 terminated.
ERROR (unhandled task failure): EOFError: read end of file
Master process (id 1) could not connect within 60.0 seconds.
exiting.
I have tested that I can connect to the container via SSH without a password.
I have also tested that, in the Julia REPL, I can add a regular machine with Julia installed to the cluster, and it works fine.
But I cannot get these two things working together. Any help or suggestions will be appreciated.
I recommend that you also deploy the Master in a Docker container. It makes your environment easily and fully reproducible.
I'm working on a way of deploying Workers in Docker containers on demand, i.e., the Master, itself deployed in a container, can deploy further DockerizedJuliaWorkers. It is similar to https://github.com/gsd-ufal/Infra.jl but assumes that Master and Workers run on the same host, which makes things less hard.
It is ongoing work and I plan to finish in the coming weeks. In a nutshell:
1) You'll need a simple DockerBackend and a wrapper to transparently run containers, set up SSH, and call addprocs with all the low-level parameters (i.e., the DockerizedJuliaWorker.jl file):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis/tree/master/src/docker
2) Read here how to build the Docker image (Dockerfile is included):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis
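In the meantime, the manual steps that the wrapper automates boil down to something like this (a sketch, reusing the image name and port from the question above):
# start a worker container exposing sshd, then add it as a Julia worker
docker run -d -p 3333:22 --name julia-sshd julia-sshd
julia -e 'addprocs(["root@localhost:3333"]); println(nprocs())'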
Please tell me if you have any suggestion on how to improve it.
Best,
André Lage.
I'm using a Mac, but I want to learn and use Ubuntu for development, and I don't care about the GUI. I used to use Vagrant and SSH into the machine, but it consumes a lot of my machine's resources. Can I use Docker for the same purpose, while also having the isolation of a VM (for when I mess things up)?
First install Docker Desktop for Mac.
Then in a terminal window run: docker run -it --name ubuntu ubuntu:xenial bash
You are now in an Ubuntu terminal and can do whatever you like.
Note: if you are using Ubuntu Bionic (18.04) or newer (ubuntu:bionic or ubuntu:latest), you must run the unminimize command inside the container so that the tools for human interaction are installed.
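For example, with the container started above (run it interactively, since unminimize asks for confirmation):
docker exec -it ubuntu unminimize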
To start again after a reboot:
docker start ubuntu
docker exec -it ubuntu bash
If you want to save your changes:
docker commit ubuntu
docker images
Look for the unnamed image in the output, then:
docker tag <imageid> myubuntu
Then you can run another container using your new image.
docker run -it --name myubuntu myubuntu bash
Or replace the original container:
docker stop ubuntu
docker rm ubuntu
docker run -it --name ubuntu myubuntu bash
Hope it helps
This is one of the few scenarios I wouldn't use Docker for :)
Base images like Ubuntu are heavily stripped-down versions of the full OS. The latest Ubuntu image doesn't have basic tools like ping and curl - that's a deliberate strategy from Canonical to minimise the size of the image, and therefore the attack surface. Typically you'd build an image to run a single app process in a container; you wouldn't SSH in and use ordinary dev tools, so they're not needed. That will make it hard for you to learn Ubuntu, because a lot of the core stuff isn't there.
On the Mac, the best VM tool I've used is Parallels - it manages to share CPU without hammering the battery. VirtualBox is good too, and with either of them you can install full Ubuntu Server from the ISO - a 5 GB disk and 1 GB RAM allocation will be plenty if you're just looking around.
With any hypervisor you can pause VMs so they stop using resources, and checkpoint them to save the image so you can restore back to it later.
Yes, you can.
Try searching Docker Hub for an Ubuntu image of your choice (by version and by who supports the image).
Most of them are very well documented, covering what was used to build them as well as how to run them and access/expose resources if needed.
Check the official one here: https://hub.docker.com/_/ubuntu/
I have deployed a standalone CoreOS server with the VMware image, following this guide, to try out CoreOS.
After deploying successfully, I found that my CoreOS only enables the Docker service; the etcd and fleet services are not running. I know how to use systemd to start the etcd and fleet services manually, and I also know that a proper cloud-config can install CoreOS so that etcd and fleet start automatically.
But I want to know that:
Is it possible to place a unit file in /etc/systemd/system to make systemd start the etcd service automatically?
If so, what is the content of the unit file?
If not, what is another way?
Thanks
Yes. You must have an etcd.service and a fleet.service with an [Install] section. I've added WantedBy=default.target in mine.
They are already present on CoreOS systems in /usr/lib64/systemd/system/. You can copy them to /etc/systemd/system/:
$ cp /usr/lib64/systemd/system/etcd.service /etc/systemd/system/
$ cp /usr/lib64/systemd/system/fleet.service /etc/systemd/system/
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/fleet.service
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/etcd.service
$ systemctl enable etcd.service
$ systemctl enable fleet.service
I'll also give you the general warning here that I have no idea what changes to /etc/systemd/ do in the long run, given CoreOS's upgrade system. An upgrade could wipe out /etc/systemd/, leaving you in a confused state as to what happened to your customized systemd scripts that are not managed by cloud-init.
The proper way to do this is with cloud-config. Specifically for VMware, you'll need to serve the cloud-config via config-drive as documented.
It's kind of a pain, but it'll work.
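For reference, a minimal cloud-config that starts both services could look like the sketch below (written from memory of the CoreOS docs of that era; generate your own discovery token at https://discovery.etcd.io/new):
#cloud-config
coreos:
  etcd:
    # placeholder token; each cluster needs its own
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start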
According to this GitHub issue, it should be possible to start a full container with Upstart, cron, etc. with Docker 0.6 or later, but how do I do that?
I was expecting that
docker run -t -i ubuntu /sbin/init
would work just like
lxc-start -n ubuntu /sbin/init
and I would get a login screen, but instead it displays nothing. I also tried to access it using SSH, but no luck. I'm using the default ubuntu image from the Docker index.
docker run ubuntu /sbin/init appears to work flawlessly for me with 0.6.6. You won't get a login screen, because Docker only manages the process. Instead, you can use docker ps -notrunc to get the full LXC container ID, and then use lxc-attach -n <container_id> to run bash in that container as root. sshd isn't installed in the container, so you can't SSH into it.
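Putting that together, something like this should drop you into a shell inside the container (a sketch; the single-dash flags match the Docker 0.6-era CLI):
# grab the full ID of the most recent container, then attach to it
CONTAINER_ID=$(docker ps -q -notrunc | head -n 1)
sudo lxc-attach -n "$CONTAINER_ID" /bin/bash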
You can use the ubuntu-upstart image:
docker run -t -i ubuntu-upstart:14.04 /sbin/init
Although this solution is unfortunately deprecated, it is good enough if you need a full OS container that 'drives' like a normal Ubuntu 12.04, 14.04, or 14.10 system today (change the :14.04 bit accordingly). If no version is specified, it defaults to 14.04. I have not used it heavily and had some issues installing more complicated packages (e.g. dbus!), but it might work for you.
Alas, Ubuntu has switched to systemd in more recent releases. Googling reveals that there is ongoing work to make systemd run in a Docker container without requiring elevated privileges, but it does not seem quite ready for prime time. Hopefully it will be ready when 16.04 becomes the LTS.
Another option is of course to use phusion/baseimage, but it has its own approach to starting services. It seems better suited to minimal multi-process containers.