Install and create a Kubernetes cluster on LXC Proxmox - docker

I would like to know whether anyone has tried to install and create a Kubernetes cluster inside an LXC container on Proxmox.
Which steps should I follow to achieve that?

You can use the articles below to get the desired result:
Run kubernetes inside LXC container or Run Kubernetes as Proxmox container
To summarize those articles, you should perform the following steps:
1) Add the overlay module for Docker:
echo overlay >> /etc/modules
2) Grant the container more privileges by modifying its config:
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
3) Make the root filesystem shared by adding this to /etc/rc.local:
echo '#!/bin/sh -e
mount --make-rshared /' > /etc/rc.local
chmod +x /etc/rc.local
4) Initialize the cluster using kubeadm.
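With the steps above in place, the cluster init is the standard kubeadm flow. A minimal sketch (the pod CIDR and the preflight-error skips are assumptions; inside LXC, kubeadm's swap and SystemVerification preflight checks typically need to be skipped):

```shell
# Inside the LXC container, after installing docker, kubelet and kubeadm:
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=Swap,SystemVerification
```

Afterwards, install a pod network matching the CIDR you chose.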

Related

Running an image created through Docker in LXC

I want to run an image which I have already created and uploaded on the docker hub. Is it possible to run that image on lxc/lxd? Basically I want to do performance comparison between docker and lxc.
I have installed skopeo, umoci, go-md2man and jq.
Now, when I try to run the command lxc-create c1 -t oci -- --url docker://awaisaz/test:part2
it gives a trust policy error: /etc/containers/policy.json: no such file or directory
Can anyone suggest a solution or an alternative way to do this?
So, you want to run a Docker container inside an LXC container.
First, you need to get the Docker daemon up and running inside the LXC container:
sudo lxc config edit <lxc-container-name>
In the config section, add:
linux.kernel_modules: overlay,ip_tables
security.nesting: true
security.privileged: true
Then exit the editor and restart the LXC container:
sudo lxc restart <container_name>
After a successful restart of the LXC container,
exec into it:
sudo lxc exec <container_name> /bin/bash
Then remove Docker's stale local network database:
sudo rm /var/lib/docker/network/files/local-kv.db
Restart the Docker service (inside the LXC container):
service docker restart
Then you can use Docker inside the LXC container as if you were in a VM.
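To verify the setup, a quick smoke test from inside the LXC container (the hello-world image here is just an example):

```shell
docker info                  # is the daemon reachable?
docker run --rm hello-world  # can it pull and run a container?
```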

How to add known_hosts for passwordless ssh within a docker container in docker-compose.yml?

I want to have passwordless SSH between two Docker containers. How do I add a known_hosts entry for that using a docker-compose.yml file?
I want to run Ansible in a Docker environment. To deploy and run RPMs on the deployment node, I need passwordless SSH from container1 to container2. For that, I have to add container2's host key to container1's known_hosts file.
How can I do this?
I don't know of a solution using docker-compose.yml alone. The one I propose involves creating a Dockerfile whose CMD is a shell script that runs:
ssh-keyscan -t rsa whateverdomain >> ~/.ssh/known_hosts
Maybe you can scan /etc/hosts for hostnames, or pass one in as an ENV variable.
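A minimal sketch of that approach (the peer hostname container2 and the paths are assumptions; in compose, a service name resolves as a hostname):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical) -- baked into the image via a Dockerfile:
#   COPY entrypoint.sh /entrypoint.sh
#   ENTRYPOINT ["/entrypoint.sh"]
mkdir -p ~/.ssh
# container2 is the peer's compose service name (an assumption here)
ssh-keyscan -t rsa container2 >> ~/.ssh/known_hosts
exec "$@"   # hand off to the container's real command
```

The keyscan runs at container start rather than at build time, so it picks up whatever host key the peer generated on boot.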
Try to mount it from the host into the container:
--volume local/path/to/known_hosts:/etc/ssh/ssh_known_hosts
In case that doesn't work, take a look at some similar cases related to SSH keys in Docker, like this
and this
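For the mount-from-host suggestion, the docker-compose.yml equivalent would be roughly this (service name and host path are placeholders):

```yaml
# docker-compose.yml fragment (hypothetical service name)
services:
  container1:
    volumes:
      - ./known_hosts:/etc/ssh/ssh_known_hosts:ro
```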

How to mount docker volume with jenkins docker container?

I have Jenkins running inside a container and the project source code on GitHub.
I need to run the project in a container on the same host as Jenkins, but not as docker-in-docker; I want to run them as sibling containers.
My pipeline looks like this:
pull the source from github
build the project image
run the project container
What I do right now is use the host's Docker socket from the Jenkins container:
/var/run/docker.sock:/var/run/docker.sock
I have a problem when the Jenkins container mounts the volume with source code from /var/jenkins_home/workspace/BRANCH_NAME into the project container:
volumes:
- ./servers/identity/app:/srv/app
I get an empty folder /srv/app in the project container.
My best guess is that Docker tries to mount it from the host and not from the Jenkins container.
So the question is: how can I explicitly set the container from which the volume is mounted?
I got the same issue when using a Jenkins Docker container to run another container.
Scenario 1 - Running a container inside the Jenkins docker container
This is not a recommended way; the explanation goes here. If you still need to use this approach, then this problem is not a problem.
Scenario 2 - Running a docker client inside the Jenkins container
Suppose we need to run another container (ContainerA) from inside the Jenkins docker container; the Docker Pipeline plugin will use --volumes-from to mount the Jenkins container's volumes into ContainerA.
If you try to use --volume or -v to map a specific directory in the Jenkins container to ContainerA, you will get unexpected behavior.
That's because --volume or -v maps directories from the host to ContainerA, rather than from inside the Jenkins container. If the directory is not found on the host, you get an empty dir inside ContainerA.
In short, we cannot map a specific directory from containerA to containerB; we can only mount all of containerA's volumes into containerB, and volume aliases are not supported.
Solution
If your Jenkins is running with a host volume, you can map the host directories to the target container.
Otherwise, you can access the files inside the newly created container at the same location as in the Jenkins container.
Try:
docker run -d --volumes-from <ContainerID> <YourImage>
where <ContainerID> is the ID of the container you want to mount data from.
You can also create a volume:
docker volume create <volname>
and assign it to both containers:
volumes:
- <volname>:/srv/app
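Put together as a docker-compose.yml sketch (the service names, image, and build path are placeholders; the named volume is what both services share):

```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - appsrc:/var/jenkins_home/workspace
  app:
    build: ./servers/identity/app
    volumes:
      - appsrc:/srv/app
volumes:
  appsrc:
```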
Sharing the socket between the host and Jenkins was my problem, because /var/jenkins_home is most likely a volume for the Jenkins container.
My solution was installing Docker inside a systemd container without sharing the socket:
docker run -d --name jenkins \
--restart=unless-stopped \
--privileged \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v jenkins-vol:/var/lib/jenkins \
--tmpfs /run \
--tmpfs /run/lock \
ubuntu:16.04 /sbin/init
Then install Jenkins, Docker, and Docker Compose inside it.
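A rough install sketch inside that systemd container (package names are assumptions for Ubuntu 16.04; Jenkins itself comes from its own apt repository, so follow the official Jenkins Debian instructions for the key and source line):

```shell
apt-get update
apt-get install -y docker.io docker-compose openjdk-8-jre
# Jenkins: add the Jenkins apt repository per its official docs,
# then: apt-get install -y jenkins
```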

Adding additional docker node to Shipyard

I have installed Shipyard following the automatic procedure on their website. This works and I can access the UI. It's available on 172.31.0.179:8080. From the UI, I see a container called 'shipyard-discovery' which is exposing 172.31.0.179:4001.
I'm now trying to add an additional node to Shipyard. For that I use Docker Machine to install an additional host and on that host I'm using the following command to add the node to Shipyard.
curl -sSL https://shipyard-project.com/deploy | ACTION=node DISCOVERY=etcd://173.31.0.179:4001 bash -s
This additional node is not added to the Swarm cluster and is not visible in the Shipyard UI. On that second host I get the following output
-> Starting Swarm Agent
Node added to Swarm: 172.31.2.237
This indicates that the node is indeed not added to the Swarm cluster, as I was expecting something like: Node added to Swarm: 172.31.0.179
Any idea on why the node is not added to the Swarm cluster?
Following the documentation for manual deployment, you can add a Swarm agent by specifying its host IP:
docker run \
-ti \
-d \
--restart=always \
--name shipyard-swarm-agent \
swarm:latest \
join --addr [NEW-NODE-HOST-IP]:2375 etcd://[IP-HOST-DISCOVERY]:4001
I've just managed to make Shipyard see the nodes in my cluster; you have to follow the instructions under Node Installation, creating a bash file that does the deploy for you with the discovery IP set.
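That per-node deploy file can be as small as this (the discovery IP must be the Shipyard/etcd host, 172.31.0.179 in the question):

```shell
#!/bin/sh
# deploy-node.sh (hypothetical) -- run on each new node
curl -sSL https://shipyard-project.com/deploy \
  | ACTION=node DISCOVERY=etcd://172.31.0.179:4001 bash -s
```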

Is it possible to restart docker container from inside it

I'd like to package Selenium Grid Extras into a Docker image.
When this service is run without a Docker container, it can reboot the OS it's running on. I wonder if I can set up the container so that the Selenium Grid Extras service running inside it can restart the container.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume, then let Selenium write information there, like a flag file.
On the host, you would have a scheduled task / cronjob that checks for this flag in the shared folder and, if it has a certain status, invokes a docker restart from there.
Not sure if there are more elegant solutions for this, but this is what came to my mind ad hoc.
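A rough sketch of that host-side check (the paths, the container name, and the flag protocol are all assumptions):

```shell
#!/bin/sh
# check-restart-flag.sh (hypothetical) -- run from cron on the HOST,
# e.g. every minute. The directory is bind-mounted into the container.
FLAG="/srv/selenium-shared/restart.flag"
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"                  # consume the flag first
    docker restart selenium-grid   # container name is an assumption
fi
```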
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS, but normally you can manage your host
containers from within a container by mounting the Docker socket.
Such as:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/
