How do I run a docker image that I built locally on Google Container Engine?
You can push your image to Google Container Registry and reference it from your pod manifest.
Detailed instructions
Assuming you have DOCKER_HOST properly set up, a GKE cluster running the latest version of Kubernetes, and the Google Cloud SDK installed.
Set gcloud defaults and fetch cluster credentials
gcloud components update kubectl
gcloud config set project <your-project>
gcloud config set compute/zone <your-cluster-zone>
gcloud config set container/cluster <your-cluster-name>
gcloud container clusters get-credentials <your-cluster-name>
Tag your image
docker tag <your-image> gcr.io/<your-project>/<your-image>
Push your image
gcloud docker push gcr.io/<your-project>/<your-image>
Create a pod manifest for your container: my-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: <container-name>
    image: gcr.io/<your-project>/<your-image>
    ...
Schedule this pod
kubectl create -f my-pod.yaml
Repeat the manifest and scheduling steps for each pod you want to run. You can have multiple definitions in a single file, using a line with --- as a delimiter.
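For instance, one file holding two pod definitions separated by --- might look like this (the pod names and image names are illustrative placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
  - name: app
    image: gcr.io/<your-project>/<image-a>
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  containers:
  - name: app
    image: gcr.io/<your-project>/<image-b>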
The setup I use is to deploy my own Docker registry combined with SSH port forwarding. For that purpose I set up an SSH server in the cluster and use ~/.ssh/config to configure a port forward to the registry.
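A minimal ~/.ssh/config entry for such a forward could look like this sketch (the host alias, user, and ports are assumptions, not details from the actual setup):
Host cluster-registry
    HostName <ssh-server-address>
    User <user>
    LocalForward 5000 localhost:5000
With the tunnel open, docker push localhost:5000/<your-image> reaches the in-cluster registry.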
I also use Jenkins to build the images right in the cloud.
Step 1: Get credentials for the cluster you want to work on
gcloud container clusters get-credentials [$cluster_name]
Step 2: Tag the docker image you want to run
docker tag nginx gcr.io/first-project/nginx
Step 3: Push image
gcloud docker push gcr.io/first-project/nginx
Step 4: Create a yaml file (test.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  containers:
  - name: nginx1
    image: gcr.io/first-project/nginx
Step 5: Create the pod
kubectl create -f test.yaml
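To confirm the pod came up, a quick check (the pod name comes from the manifest above):
kubectl get pods
kubectl describe pod nginx1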
You could copy the registry authentication key of your private Docker registry to the .dockercfg file in the root directory of the minions (nodes) right before starting the pods.
Or run docker login on the minions before starting:
docker login --username=<> --password=<> --email=<> <DockerServer>
Referring to the private docker image in the pod configuration should then work as expected.
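For illustration, a pod spec referencing the private image could then look like this sketch (the registry host and image name are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
  - name: app
    image: <DockerServer>/<your-image>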
Related
Let's say I have a deployment that looks something like this:
apiVersion: v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    kind: Pod
    metadata: myapp-pod
    labels:
      apptype: front-end
    containers:
    - name: nginx
      image: <--what is supposed to go here?-->
How do I properly build a container using an existing Dockerfile without having to push a build image up to Docker hub?
Kubernetes can't build images. You are all but required to use an image registry. This isn't necessarily Docker Hub: the various public-cloud providers (AWS, Google, Azure) all have their own registry offerings, there are some third-party ones out there, or you can run your own.
If you're using a cloud-hosted Kubernetes installation (EKS, GKE, ...) the "right" way to do this is to push your built image to the corresponding image registry (ECR, GCR, ...) before you run it.
docker build -t gcr.io/my/image:20201116 .
docker push gcr.io/my/image:20201116
containers:
- name: anything
  image: gcr.io/my/image:20201116
There are some limited exceptions to this in a very local development environment. For example, if you're using Minikube as a local Kubernetes installation, you can point docker commands at it, so that docker build builds an image inside the Kubernetes context.
eval $(minikube docker-env)
docker build -t my-image:20201116 .
containers:
- name: anything
  image: my-image:20201116 # matches `docker build -t` option
  imagePullPolicy: Never   # since you manually built it inside the minikube Docker
Check out https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment.
Make sure you give the documentation a good read :)
I've used helm create helloworld-chart to create an application using a local docker image I created. I think the issue is that I have the ports all messed up.
DOCKER PIECES
--------------------------
Dockerfile
FROM busybox
ADD index.html /www/index.html
EXPOSE 8008
CMD httpd -p 8008 -h /www; tail -f /dev/null
(I also have an index.html file in the same directory as my Dockerfile)
Create Docker Image (and publish locally)
docker build -t hello-world .
I then ran this with docker run -p 8080:8008 hello-world and verified I am able to reach it from localhost:8080. (I then stopped that docker container)
I also verified this image was in docker locally with docker image ls and got the output:
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 8640a285e98e 20 minutes ago 1.23MB
HELM PIECES
--------------------------
Created a helm chart via helm create helloworld-chart.
Edited the files:
values.yaml
# ...elided because left the same as default...
image:
  repository: hello-world
  tag: latest
  pullPolicy: IfNotPresent
# ...elided because left the same as default...
service:
  name: hello-world
  type: NodePort # Chose this because MiniKube doesn't have LoadBalancer installed
  externalPort: 30007
  internalPort: 8008
  port: 80
service.yaml
# ...elided because left the same as default...
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.internalPort }}
      nodePort: {{ .Values.service.externalPort }}
deployment.yaml
# ...elided because left the same as default...
spec:
  # ...elided because left the same as default...
  containers:
    ports:
      - name: http
        containerPort: {{ .Values.service.internalPort }}
        protocol: TCP
I verified this "looked" correct with both helm lint helloworld-chart and helm template ./helloworld-chart
HELM AND MINIKUBE COMMANDS
--------------------------
# Packaging my helm
helm package helloworld-chart
# Installing into Kubernetes (Minikube)
helm install helloworld helloworld-chart-0.1.0.tgz
# Getting an external IP
minikube service helloworld-helloworld-chart
When I do that, it gives me an external IP like http://172.23.13.145:30007 and opens it in a browser, but it just says the site cannot be reached. What do I have mismatched?
UPDATE/MORE INFO
---------------------------------------
When I check the pod, it's in a CrashLoopBackOff state. However, I see nothing in the logs:
kubectl logs -f helloworld-helloworld-chart-6c886d885b-grfbc
Logs:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
I'm not sure why it's exiting.
The issue was that Minikube was actually looking in the public Docker image repo and finding something also called hello-world. It was not finding my docker image since "local" to minikube is not local to the host computer's docker. Minikube has its own docker running internally.
You have to add your image to minikube's local repo: minikube cache add hello-world:latest.
You need to change the pull policy: imagePullPolicy: Never
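For this chart, that means pointing the values at the image already loaded into Minikube; a sketch of the relevant values.yaml keys, mirroring the values shown above:
image:
  repository: hello-world
  tag: latest
  pullPolicy: Never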
What I want to achieve:
To push a docker image to an (insecure) Artifactory docker repo from a Jenkins pipeline running on Kubernetes (jnlp).
What I am trying:
I am using the Kubernetes plugin on Jenkins (running on k8s), which runs a docker:dind container as the slave agent. When I push, it fails with a certificate error (x509) since it is an insecure Artifactory repo. Hence, to push to the insecure Artifactory, I want to add it to --insecure-registries in the docker client's daemon.json.
But unfortunately, even after updating the daemon.json inside docker:dind, it does not take effect, because the docker client used is the one from the underlying node where k8s is running (minikube in my case), and docker:dind is only used as the daemon.
So I am unable to add my Artifactory repo to --insecure-registries in the docker client unless I update the daemon.json of the docker client on the k8s cluster (on minikube).
What I want to do:
Hence I want to switch the docker client from the k8s node (minikube) to another docker slave running inside the Kubernetes plugin, where I can configure daemon.json.
Can you help me do that? Or please propose a better way to fix this.
Instead of using docker-in-docker, you can let the jnlp-slave use your host's docker daemon if you mount /var/run/docker.sock into it. Then you can edit your host's /etc/docker/daemon.json to add your insecure registry.
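The daemon.json change on the host is small; a sketch, where the registry host and port are placeholders for your Artifactory address:
{
  "insecure-registries": ["artifactory.example.local:8081"]
}
Restart the host's docker daemon after editing the file so the setting takes effect.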
Assuming you're defining your jnlp-slave template in the pipeline, you can do:
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
spec:
  containers:
  - name: jnlp
    image: registry/jnlp-slave:latest
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath: { path: /var/run/docker.sock }
"""
    }
  }
}
I have a Maven project on my local machine and a docker image in my repo, and I'm using GitLab and Jenkins to automate builds. Now, with the current setup, I want to continuously deploy to Kubernetes. I have no idea how this is done. Any input will be appreciated.
my yaml file looks like this
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
  - name: client
    image: <image>
    ports:
    - containerPort: 3000
The easiest way will be to set the name of the new image. See here:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
You will need to have access from your GitLab/Jenkins to your cluster.
Another option will be to use some kind of Kubernetes deployment tool such as helm, or any other solution. This will help you in more complicated scenarios where you also want to update your configuration files (k8s yamls).
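With helm, for instance, rolling out a new image tag can be a one-liner, assuming the chart exposes the tag in its values (the release and chart names are placeholders):
helm upgrade my-release ./my-chart --set image.tag=<version>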
Once the image is built and pushed to the container repository you just have to set the new image
>>> docker build -t repo-name/whatever-app:<version> .
>>> docker push repo-name/whatever-app:<version>
>>> kubectl set image deployment/my-deployment mycontainer=repo-name/whatever-app:<version>
You can use this exemplary Jenkins pipeline to build and deploy your dockerized maven-app to Kubernetes with helm. It consists of the following steps:
Git clone and setup
Build and local tests
Publish Docker and Helm
Deploy to dev and test
Deploy to staging and test
Optionally deploy to production and test
I think it's a nice starting point to realize CI/CD with Jenkins & Kubernetes.
I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. Here I have some sample microservices. While exploring Kubernetes, I found pods, services, replica sets/controllers, statefulsets, etc., and I understood those Kubernetes terminologies properly. I am planning to use Docker Hub as my image registry.
My Requirement
When there is a commit made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a docker image, and push it to Docker Hub - as mentioned earlier. After that, it needs to deploy to my test environment, pulling the image from Docker Hub.
My Confusion
When I am creating services and pods, how can I define the docker image path within the pod/service/statefulset, given that it is pulled from Docker Hub for deployment?
Can I directly add kubectl commands to a scheduled Jenkins pipeline job? How can I use kubectl commands for the Kubernetes deployment?
Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on Jenkins and provide it with the correct kube config so it can access the cluster. If your host can use kubectl, you can have a look at the $HOME/.kube/config file.
So in your job you can just use kubectl like you do from your host.
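A minimal declarative pipeline stage along those lines might look like this sketch (the stage name and manifest path are assumptions):
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Assumes kubectl is on the PATH of the Jenkins node and
                // $HOME/.kube/config points at the target cluster.
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}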
Regarding the images from Docker Hub:
Docker Hub is the default Docker registry for Docker anyway, so normally there is no need to change anything in your cluster; only if you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registry, because they are better integrated.
So this part of a deployment will pull nginx from Docker Hub no need to specify anything special for it:
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
So ensure Jenkins can do the following things from the command line:
build Docker images
Push Docker Images (make sure you called docker login on Jenkins)
Access your cluster via kubectl get pods
So an easy pipeline simply needs to do these steps:
trigger on SVN change
checkout code
create a unique version, which could be a combination of build number, SVN revision, and date
Build / Test
Build Docker Image
tag Docker Image with unique version
push Docker Image
change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using Jenkins Pipeline you can use readYaml and writeYaml to achieve this; see the sketch right after this list)
call kubectl apply -f deployment.yaml
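A hedged sketch of that readYaml/writeYaml step (from the Pipeline Utility Steps plugin); the manifest path and image name are placeholders:
def manifest = readYaml file: 'deployment.yaml'
// Point the first container at the newly built tag.
manifest.spec.template.spec.containers[0].image = "myrepo/myapp:${env.BUILD_NUMBER}"
writeYaml file: 'deployment.yaml', data: manifest, overwrite: true
sh 'kubectl apply -f deployment.yaml'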
Depending on your build system and the languages used, there are some useful tools which can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use Maven CI Friendly Versions with any maven docker plugin, or jib.
To create a deployment you need to create a yaml file.
In the yaml file, the row
image: oronboni/serviceb
points to the container image, which in this case is on Docker Hub:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: serviceb
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceb
  template:
    metadata:
      labels:
        app: serviceb
    spec:
      containers:
      - name: serviceb
        image: oronboni/serviceb
        ports:
        - containerPort: 5002
I strongly suggest that you watch the Kubernetes deployment webinar at the link below:
https://m.youtube.com/watch?v=_vHTaIJm9uY
Good luck.