I have a Kubernetes v1.8.6 cluster on Google Cloud Platform.
My desktop is a MacBook Pro running High Sierra; kubectl was installed via the google-cloud-sdk and Docker runs in a VM installed with Homebrew.
I deployed the PHP Docker image using the following Kubernetes deployment YAML file:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: php-deployment
  labels:
    app: php
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
      - name: php
        image: php:7.1.13-apache-jessie
        volumeMounts:
        - mountPath: /var/www/html
          name: httpd-storage
        - mountPath: /etc/apache2
          name: httpd-conf-storage
        - mountPath: /usr/local/etc/php
          name: php-storage
        ports:
        - containerPort: 443
        - containerPort: 80
      volumes:
      - name: httpd-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: httpd-disk
      - name: httpd-conf-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: httpd-conf-disk
      - name: php-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: php-disk
I created it with kubectl create -f <file>.yaml and it works, so far so good.
Now I want to extend this image to install Certbot on it, so I created the following Dockerfile:
FROM php:7.1.13-apache-jessie
RUN bash -c 'echo deb http://ftp.debian.org/debian jessie-backports main >> /etc/apt/sources.list'
RUN apt-get update
RUN apt-get install -y python-certbot-apache -t jessie-backports
I placed this file in a directory called build and built an image from the Dockerfile using docker build -t tuxin-php ./build.
I have no idea where the Docker image is stored, because Docker runs inside a VM on High Sierra, and I'm not sure whether I have local access to it or need to scp it somewhere, though that may not be needed.
Is there a way to deploy the Dockerfile I created directly?
Or do I have to build the image and somehow install it, and if so, how?
I'm a bit confused, so any information regarding the issue would be greatly appreciated.
Thank you
First of all, you need to build your Docker image. Then you need to push that image to a Docker registry so that your pod can pull it from there; building the image alone is not enough.
As for where to keep the image, you can use https://hub.docker.com/.
You can follow these steps:
Create an account at https://hub.docker.com/
Configure your machine to use your Docker Hub account. Use this command:
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: <your docker hub username>
Password: <your docker hub password>
Now you are ready to push your Docker image to your registry.
In this case you want to push your image named tuxin-php, so you need to create a repository on Docker Hub (https://hub.docker.com/) using the same name, tuxin-php. (https://docs.docker.com/docker-hub/repos/)
Try this now
$ docker build -t xxxx/tuxin-php ./build
$ docker push xxxx/tuxin-php
Here, xxxx is your Docker Hub username.
When you push xxxx/tuxin-php, your image is stored in the tuxin-php repository under your username.
Finally, reference this image in your deployment:
containers:
- name: php
  image: xxxx/tuxin-php
Your pod will pull xxxx/tuxin-php from docker hub.
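If the Deployment from the question is already running, one way to switch it over to the new image is a rolling update (a sketch, assuming the Deployment and container names from the question):
$ kubectl set image deployment/php-deployment php=xxxx/tuxin-php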
Hope this will help
I have a pod running Linux that I have let others use. Now I need to save the changes they made. Since I sometimes need to delete/restart the pod, the changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.
I have tried kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu and then installing docker and dockerd inside, but they don't have root permission to perform operations. I also installed crictl; it listed the containers but has no option to save them.
I also created a privileged Docker image and a pod from it, then used the command kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh and tried to list the running containers, but nothing was listed. Below is the deployment I used for the privileged Docker container:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: app
  labels:
    app: backend-app
    backend-app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
      task: app
  template:
    metadata:
      labels:
        app: backend-app
        task: app
    spec:
      nodeSelector:
        kubernetes.io/hostname: pool-58i9au7bq-mgs6d
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /run/docker.sock
          type: Socket
      containers:
      - name: app
        image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
        ports:
        - containerPort: 80
        volumeMounts:
        - name: task-pv-storage
          mountPath: /var/run/docker.sock
Is there any way I can achieve this, i.e. get the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps and I do not have SSH access to the nodes.
This is not a feature of Kubernetes or CRI. Docker does support snapshotting a running container to an image; however, Kubernetes no longer supports Docker.
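For reference, on a machine where you do have direct access to a Docker daemon, that snapshot is a plain docker commit (a sketch; the container and image names are placeholders):
$ docker ps                                          # find the running container's ID
$ docker commit <container-id> myrepo/saved-app:v1   # snapshot its filesystem into an image
$ docker push myrepo/saved-app:v1                    # push it so a pod can pull it later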
Thank you all for your help and suggestions. I found a way to achieve it using the tool nerdctl - https://github.com/containerd/nerdctl.
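For anyone looking for the rough shape of that workflow: nerdctl talks to containerd directly, and Kubernetes-managed containers live in containerd's k8s.io namespace. A sketch, run on the node with access to containerd's socket (container and registry names are placeholders):
$ nerdctl --namespace k8s.io ps                      # list the Kubernetes-managed containers
$ nerdctl --namespace k8s.io commit <container-id> registry.example.com/saved-app:v1
$ nerdctl --namespace k8s.io push registry.example.com/saved-app:v1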
I have a container that I need to configure for a k8s YAML deployment. The workflow with docker run in the terminal looks like this:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/myoh:v1 init myproject
This command creates a directory called myproject. To complete the workflow, I need to cd into this myproject folder and run:
docker run -v $(pwd):/project \
-w /project \
-p 8081:8081 \
gcr.io/base-project/myoh:v1
Any idea how to convert this to either a docker-compose file or a k8s pod/deployment YAML? I have tried everything that came to mind, with no success.
The bind mount of the current directory can't be translated to Kubernetes at all. There's no way to connect a pod's filesystem back to your local workstation. A standard Kubernetes setup has a multi-node installation, and if it's possible to directly connect to a node (it may not be) you can't predict which node a pod will run on, and copying code to every node is cumbersome and hard to maintain. If you're using a hosted Kubernetes installation like GKE, it's even possible that the cluster autoscaler will create and delete nodes automatically, and you won't have an opportunity to manually copy things in.
You need to build your application code into a custom image. That can set the desired WORKDIR, COPY the code in, and RUN any setup commands that are required. Then you need to push that image to a repository, like GCR.
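A minimal Dockerfile sketch for such an image (the base image, paths, and port come from the question; everything else is an assumption to adapt):
FROM gcr.io/base-project/myoh:v1   # assumed: build on top of the existing image
WORKDIR /project                   # the directory the second docker run command used
COPY . /project                    # bake the generated project files into the image
EXPOSE 8081                        # the port the container serves on
With that Dockerfile in place, the build and push look like: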
docker build -t gcr.io/base-project/my-project:v1 .
docker push gcr.io/base-project/my-project:v1
Once you have that, you can create a minimal Kubernetes Deployment to run it. Set the GCR name of the image you built and pushed as its image:. You will also need a Service to make it accessible, even from other Pods in the same cluster (see the sketch after the Deployment below).
Try this (untested yaml, but you will get the idea)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoh-deployment
  labels:
    app: myoh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myoh
  template:
    metadata:
      labels:
        app: myoh
    spec:
      initContainers:
      - name: init-myoh
        image: gcr.io/base-project/myoh:v1
        command: ['sh', '-c', "mkdir -p myproject"]
      containers:
      - name: myoh
        image: gcr.io/base-project/myoh:v1
        ports:
        - containerPort: 8081
        volumeMounts:
        - mountPath: /projects
          name: project-volume
      volumes:
      - name: project-volume
        hostPath:
          # directory location on host
          path: /data
          # this field is optional
          type: Directory
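And a minimal Service sketch to expose it, matching the labels above (the Service name and port mapping are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myoh-service
spec:
  selector:
    app: myoh            # matches the pod labels from the Deployment
  ports:
  - port: 8081           # port clients and other pods connect to
    targetPort: 8081     # containerPort in the Deployment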
I'm trying to build a Docker image using DinD with Atlassian Bamboo.
I've created the deployment/StatefulSet as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
      - image: atlassian/bamboo-server:latest
        imagePullPolicy: IfNotPresent
        name: bamboo-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        securityContext:
          privileged: true
        volumeMounts:
        - name: bamboo-home
          mountPath: /var/atlassian/application-data/bamboo
        - mountPath: /opt/atlassian/bamboo/conf/server.xml
          name: bamboo-server-xml
          subPath: bamboo-server.xml
        - mountPath: /var/run
          name: docker-sock
      volumes:
      - name: bamboo-home
        persistentVolumeClaim:
          claimName: bamboo-home
      - configMap:
          defaultMode: 511
          name: bamboo-server-xml
        name: bamboo-server-xml
      - name: docker-sock
        hostPath:
          path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Note that I've set privileged: true in securityContext to enable this.
However, when trying to run docker images, I get a permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
Am I missing something with regard to setting up DinD?
The /var/run/docker.sock file on the host system is owned by a different user than the user that is running the bamboo-server container process.
Without knowing any details about your cluster, I would assume docker runs as 'root' (UID=0). The bamboo-server runs as 'bamboo', as can be seen from its Dockerfile, which will normally map to a UID in the 1XXX range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.
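A quick way to confirm the mismatch from inside the Bamboo container (a sketch using standard shell utilities):
$ id                              # the UID/GID the container process runs as
$ ls -l /var/run/docker.sock      # the owner and group of the mounted socket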
So I think there are two possible approaches:
Either the container process continues to run as the 'bamboo' user but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID that the bamboo user maps to on the host system to the host's docker group. However, making changes to the host system might or might not be an option depending on the context of your cluster, and it is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.
Or the container is changed to run as a sufficiently privileged user to begin with, namely the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run time by means of the runAsUser and runAsGroup securityContext settings as specified here. Both should be 0.
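A sketch of the second option, reusing the container from the StatefulSet above:
      containers:
      - image: atlassian/bamboo-server:latest
        name: bamboo-server
        securityContext:
          privileged: true
          runAsUser: 0     # run the container process as root
          runAsGroup: 0    # so it can access the host's docker.sock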
As mentioned in the documentation here
If you want to run docker as a non-root user, you need to add that user to the docker group.
Create the docker group if it does not exist
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
$ newgrp docker
Verify that you can run docker commands without sudo
$ docker run hello-world
If that doesn't help, you can change the permissions of the Docker socket so that you are able to connect to the Docker daemon at /var/run/docker.sock:
sudo chmod 666 /var/run/docker.sock
A better way to handle this is to run a docker:dind sidecar container and export DOCKER_HOST=tcp://dind:2375 in the main Bamboo container. This way you invoke Docker in the dind container and won't need to mount /var/run/docker.sock at all.
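A rough sketch of that setup (assumptions: as a sidecar in the same pod the daemon is reached via localhost rather than a dind hostname, and DOCKER_TLS_CERTDIR is cleared so port 2375 serves plain TCP):
      containers:
      - name: bamboo-server
        image: atlassian/bamboo-server:latest
        env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375    # the dind sidecar in the same pod
      - name: dind
        image: docker:dind
        securityContext:
          privileged: true               # dind itself still requires privileged mode
        env:
        - name: DOCKER_TLS_CERTDIR
          value: ""                      # serve plain TCP on 2375 instead of TLS on 2376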
I am trying to run an image on Kubernetes with the below Dockerfile:
FROM centos:6.9
COPY rpms/* /tmp/
RUN yum -y localinstall /tmp/*
ENTRYPOINT service test start && /bin/bash
Now when I try to deploy this image using pod.yml as shown below,
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: testpod
  name: testpod
spec:
  containers:
  - image: test:v0.2
    name: test
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /data
      name: testpod
  volumes:
  - name: testpod
    persistentVolumeClaim:
      claimName: testpod
Now when I try to create the pod, it goes into a CrashLoopBackOff. How can I make the container wait in /bin/bash on Kubernetes? When I use docker run -d test:v0.2 it works fine and keeps running.
You need to attach a terminal to the running container. When starting a pod using kubectl run ... you can use -i --tty to do that. In the pod YAML file, you can add the following to the container spec to attach a tty:
stdin: true
tty: true
You can run a command like tail -f /dev/null to keep your container always on; this can be done inside your Dockerfile or in your Kubernetes YAML file, as in the sketch below.
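For example, in the pod spec from the question (a sketch; it just overrides the container's command):
  containers:
  - image: test:v0.2
    name: test
    command: ["/bin/sh", "-c"]
    args: ["service test start && tail -f /dev/null"]   # start the service, then block forever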
I am trying to start a container-vm Google Compute Engine VM instance with a container created when the machine starts. An example of this can be found in this documentation section: Creating containers at time of instance creation.
Everything works fine with the given example:
apiVersion: v1
kind: Pod
metadata:
  name: service
spec:
  containers:
  - name: jillix-service
    image: gcr.io/google-containers/busybox
    command: ['nc', '-p', '8000', '-l', '-l', '-e', 'echo', 'hello world!']
    imagePullPolicy: Always
    ports:
    - containerPort: 8000
      hostPort: 80
but when I instead try to use my own container image, it does not work:
apiVersion: v1
kind: Pod
metadata:
  name: service
spec:
  containers:
  - name: jillix-service
    image: gcr.io/sigma-cairn-99810/service
    imagePullPolicy: Always
    ports:
    - containerPort: 8000
      hostPort: 80
In the working example docker reports the following container images to be on the VM:
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
gcr.io/google_containers/pause 0.8.0 2c40b0526b63 7 months ago 241.7 kB
gcr.io/google-containers/busybox latest 4986bf8c1536 10 months ago 2.433 MB
but when I use my container image, this is missing:
gabriel#container-image-builder:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
gcr.io/google_containers/pause 0.8.0 2c40b0526b63 7 months ago 241.7 kB
So I assume that this is the reason why my container is not starting. But why doesn't the VM download my gcr.io/sigma-cairn-99810/service image?
Does it have anything to do with authentication? (When I manually log into the VM and run gcloud docker pull, I am prompted to gcloud auth login first; after that I can pull my image, docker run it normally, and everything works.)
Does the container-vm you started have (at least) the storage "read only" scope?
You can check this with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal./computeMetadata/v1/instance/service-accounts/default/scopes
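If the scope is missing, the simplest fix is usually to recreate the instance with it (a sketch of the gcloud command; the instance name, zone, and manifest file are placeholders, and the image family/metadata key follow the container-vm documentation referenced in the question):
$ gcloud compute instances create my-container-vm \
    --image-family container-vm --image-project google-containers \
    --zone us-central1-a \
    --scopes storage-ro \
    --metadata-from-file google-container-manifest=containers.yaml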