I am trying to learn Docker and Kubernetes; you could say I am a beginner. I pull images with Docker, and I want to deploy three of these containers to Kubernetes. How can I do this?
docker images
REPOSITORY                    TAG       IMAGE ID       CREATED         SIZE
nginx                         latest    2b7d6430f78d   11 days ago     142MB
ubuntu                        latest    df5de72bdb3b   4 weeks ago     77.8MB
busybox                       latest    7a80323521cc   5 weeks ago     1.24MB
gcr.io/k8s-minikube/kicbase   v0.0.33   b7ab23e98277   5 weeks ago     1.14GB
fedora                        latest    98ffdbffd207   3 months ago    163MB
hello-world                   latest    feb5d9fea6a5   11 months ago   13.3kB
You need to create a YAML file that contains a Pod spec like this for the nginx image:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
To create the pod, run the following command:
kubectl apply -f pathToYourYamlFile
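You can verify the result with kubectl get pods. If what you want is three replicas of one image rather than three separate pods, a Deployment is the usual tool; for three different images, write one such file per image. A minimal sketch, assuming the nginx:latest image from your list (the nginx-deploy name and the app: nginx label are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # matches the tag in your docker images output
        ports:
        - containerPort: 80

Apply it the same way with kubectl apply -f and check the pods with kubectl get pods.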
Related
I'm trying to run some docker images on Kubernetes.
docker images
master* $ docker images [15:16:49]
REPOSITORY   TAG      IMAGE ID       CREATED             SIZE
usm          latest   4dd5245393bf   About an hour ago   158MB
kuard        latest   497961f486c7   4 days ago          22.9MB
docker container
master* $ docker ps [15:21:40]
CONTAINER ID   IMAGE                                 COMMAND                  CREATED             STATUS             PORTS                                   NAMES
a46850d28303   usm                                   "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:6061->80/tcp, :::6061->80/tcp   usm
88471e086486   gcr.io/k8s-minikube/kicbase:v0.0.32   "/usr/local/bin/entr…"   2 days ago          Up 2 hours         127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp   minikube
Dockerfile
FROM nginx
COPY ./dist /usr/share/nginx/html
EXPOSE 80
kube version
master* $ minikube version [15:37:13]
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
When I run kubectl run mypod --image=usm, I get ErrImagePull
How can I run the pod with the local docker image?
master* $ kubectl run mypod --image=usm
pod/mypod created
master* $ kubectl get pods [15:07:49]
NAME    READY   STATUS         RESTARTS   AGE
mypod   0/1     ErrImagePull   0          6s
I'm trying to set the imagePullPolicy to Never:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: usm
    imagePullPolicy: Never
    name: mypod
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
master* $ kubectl apply -f kube-pod-usm.yaml [15:55:39]
pod/mypod created
master* $ kubectl get pods [15:55:54]
NAME    READY   STATUS              RESTARTS   AGE
mypod   0/1     ErrImageNeverPull   0          42s
You need that image to be available somewhere. Normally this is done through a registry, but for local development you can simply load your local image into your minikube cluster with the following command:
minikube image load image:tag
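For the image in your docker images output, that would be:

minikube image load usm:latest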
You may also want to check out minikube docker-env, which points your terminal's docker CLI at the Docker daemon inside minikube in a simple way.
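A typical workflow with that approach, assuming a bash-style shell and the kube-pod-usm.yaml from your question:

# point the docker CLI at the Docker daemon inside minikube
eval $(minikube docker-env)
# rebuild the image so it lands inside minikube's daemon
docker build -t usm .
# the pod spec with imagePullPolicy: Never can now find the image
kubectl apply -f kube-pod-usm.yaml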
When a Kubernetes cluster creates a new deployment or updates an existing one, it needs to pull an image. This is done by the kubelet process on each worker node. For the kubelets to pull the image successfully, it must be accessible from every node in the cluster that matches the scheduling request.
Edit the pod specification and provide the correct registry path.
If you set the imagePullPolicy to Never:
the kubelet does not try to fetch the image. If the image is somehow already present locally (in the node's local image cache), the kubelet attempts to start the container; otherwise, startup fails.
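Putting both answers together for this case, a sketch (kube-pod-usm.yaml is your spec above, which already sets imagePullPolicy: Never):

minikube image load usm:latest
kubectl delete pod mypod
kubectl apply -f kube-pod-usm.yaml
kubectl get pods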
In a GitHub Actions workflow, I am trying to reproduce the following step:
docker run -d --name testmachine --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04
It does not seem to work as expected when I configure it in the GitHub Actions workflow:
name: test
on:
  workflow_dispatch:
  push:
jobs:
  test:
    name: test
    runs-on: self-hosted
    container:
      image: jrei/systemd-ubuntu:20.04
      volumes:
        - "/sys/fs/cgroup:/sys/fs/cgroup:ro"
      options: --privileged
When I do docker ps on the first step, I get the following (which is good - you can see /lib/systemd/systemd in the command):
PS C:\Windows\system32> docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
efe232522c68   jrei/systemd-ubuntu:20.04   "/lib/systemd/systemd"   59 minutes ago   Up 59 minutes             testmachine
And on the second step (the GitHub Actions workflow) I am getting the following:
50668b9d83fc jrei/systemd-ubuntu:20.04 "tail -f /dev/null" 16 minutes ago Up 16 minutes 8fe5922c3
Any idea what I am doing wrong?
Thanks
Apologies if this question is dumb or naive... we are still learning Docker. We are running Airflow in Docker. Here are the Docker images on our GCP compute engine:
ubuntu@our-airflow:~/airflow-dir$ docker image ls
REPOSITORY              TAG               IMAGE ID       CREATED          SIZE
our-airflow_webserver   latest            aaaaaaaaaaaa   17 minutes ago   968MB
<none>                  <none>            bbbbbbbbbbbb   22 minutes ago   2.13GB
apache/airflow          2.1.4             cccccccccccc   5 weeks ago      968MB
<none>                  <none>            dddddddddddd   2 months ago     2.01GB
python                  3.7-slim-buster   eeeeeeeeeeee   17 months ago    155MB
postgres                9.6               ffffffffffff   17 months ago    200MB
ubuntu@our-airflow:~/airflow-dir$
dddddddddddd was the image that used to run when we ran docker-compose up from the command line. However, we were testing a new Dockerfile, and built the new image aaaaaaaaaaaa with the tag our-airflow_webserver. dddddddddddd used to have this tag, but it was changed to <none> when we built aaaaaaaaaaaa.
We'd like to run docker-compose up dddddddddddd, but this does not work; we get the error ERROR: No such service: dddddddddddd. How can we create a container from the image dddddddddddd with docker-compose up? Is this possible?
Edit: If I simply run docker run dddddddddddd, I do not get the desired output. I think this is because our docker-compose file is launching all of the requisite services we need for airflow (webserver, scheduler, metadata db).
Edit2: Here's the seemingly relevant webserver part of our docker-compose file:
webserver:
  # image:
  build:
    dockerfile: Dockerfile.Self
    context: .
Can we simply uncomment image, set it to image: dddddddddddd, and then comment out the build part?
can we simply uncomment image, and set it to image: dddddddddddd
Yes, you can. If you want to start the service with another image you must change the docker-compose.yml file.
and then comment out the build part?
You don't need to comment out the build part. The build only takes effect when the specified image is not found or when the --build option is passed as an argument.
If you want to ensure that the image is not going to be built, pass the --no-build argument to the docker-compose up command. This avoids building the image even if it is missing.
Check the docker-compose up docs for further information.
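For the snippet above, the edited service would look something like this (dddddddddddd stands in for the image ID from your docker image ls output):

webserver:
  image: dddddddddddd
  build:
    dockerfile: Dockerfile.Self
    context: .

Then start it without rebuilding:

docker-compose up --no-build webserver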
I've used helm create helloworld-chart to create an application using a local docker image I created. I think the issue is that I have the ports all messed up.
DOCKER PIECES
--------------------------
Dockerfile
FROM busybox
ADD index.html /www/index.html
EXPOSE 8008
CMD httpd -p 8008 -h /www; tail -f /dev/null
(I also have an index.html file in the same directory as my Dockerfile)
Create Docker Image (and publish locally)
docker build -t hello-world .
I then ran this with docker run -p 8080:8008 hello-world and verified I am able to reach it from localhost:8080. (I then stopped that docker container)
I also verified this image was in docker locally with docker image ls and got the output:
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
hello-world   latest   8640a285e98e   20 minutes ago   1.23MB
HELM PIECES
--------------------------
Created a helm chart via helm create helloworld-chart.
Edited the files:
values.yaml
# ...elided because left the same as default...
image:
  repository: hello-world
  tag: latest
  pullPolicy: IfNotPresent
# ...elided because left the same as default...
service:
  name: hello-world
  type: NodePort # Chose this because MiniKube doesn't have LoadBalancer installed
  externalPort: 30007
  internalPort: 8008
  port: 80
service.yaml
# ...elided because left the same as default...
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.internalPort }}
      nodePort: {{ .Values.service.externalPort }}
deployment.yaml
# ...elided because left the same as default...
spec:
  # ...elided because left the same as default...
  containers:
    ports:
      - name: http
        containerPort: {{ .Values.service.internalPort }}
        protocol: TCP
I verified this "looked" correct with both helm lint helloworld-chart and helm template ./helloworld-chart
HELM AND MINIKUBE COMMANDS
--------------------------
# Packaging my helm
helm package helloworld-chart
# Installing into Kubernetes (Minikube)
helm install helloworld helloworld-chart-0.1.0.tgz
# Getting an external IP
minikube service helloworld-helloworld-chart
When I do that, it gives me an external IP like http://172.23.13.145:30007 and opens it in a browser, but it just says the site cannot be reached. What do I have mismatched?
UPDATE/MORE INFO
---------------------------------------
When I check the pod, it's in a CrashLoopBackOff state. However, I see nothing that looks like an error in the logs:
kubectl logs -f helloworld-helloworld-chart-6c886d885b-grfbc
Logs:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
I'm not sure why it's exiting.
The issue was that Minikube was actually looking in the public Docker image repo and finding something also called hello-world. It was not finding my docker image since "local" to minikube is not local to the host computer's docker. Minikube has its own docker running internally.
You have to add your image to minikube's local repo: minikube cache add hello-world:latest.
You need to change the pull policy: imagePullPolicy: Never
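In chart terms, first load the image into minikube:

minikube cache add hello-world:latest

Then set the policy in values.yaml (repository and tag stay as in your values above):

image:
  repository: hello-world
  tag: latest
  pullPolicy: Never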
I am really having trouble debugging this and could use some help. I am successfully starting a Kubernetes service and deployment using a working docker image.
My service file:
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
kubectl get pods shows that the pods are up and running. I have tested jumping into the pod/container with a shell and tried running my application, and it works. When I run kubectl describe deploy auth-deploy, I see a container listed as auth-pod. However, I am not seeing any containers when I run docker ps or docker ps -a. Also, the logs for my pods show nothing. Is there something I am doing wrong?
For reference, here is my Dockerfile:
FROM node:8.11.2-alpine AS build
LABEL maintainer="info@XXX.com"
# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src
# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging
# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production
# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src
# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
# Expose Ports Needed
EXPOSE 3000
VOLUME [ "/src/log" ]
# Start App
CMD [ "yarn", "start-staging" ]
Is it possible that you are running docker ps on the K8s-master instead of where the pods are located?
You can find out where your pods are running by running the command below:
$ kubectl describe pod auth-deploy
It should return something similar to below (in my case it's a percona workload):
$ kubectl describe pod percona
Name:        percona-b98f87dbd-svq64
Namespace:   default
Node:        ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx
Get the IP, SSH into the node, and run docker ps locally on the node where your container is located.
$ docker ps | grep percona
010f3d529c55 percona "docker-entrypoint.s…" 7 minutes ago Up 7 minutes k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc k8s.gcr.io/pause-amd64:3.1 "/pause" 8 minutes ago Up 7 minutes k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
Another possibility is that you might be using a different container runtime, such as rkt, containerd, or lxd, instead of Docker.
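You can check which runtime each node uses; kubectl get nodes -o wide includes a CONTAINER-RUNTIME column:

kubectl get nodes -o wide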
Kubernetes pods are groups of containers that run together on a node. Kubernetes decides where to create pods and manages their lifecycle. A Kubernetes cluster consists of worker nodes and a master server. The master server connects to the nodes, creates containers, and bonds them into pods. The master node is designed to run only management components, such as the API server, the cluster-state database etcd, and the other daemons required to keep the cluster up and running.
docker ps
shows nothing in this case.
To get list of running pods:
kubectl get pods
You can then attach to a pod already running on a node:
kubectl attach -i <podname>
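If you want a shell inside the pod rather than attaching to its main process, kubectl exec is often more convenient (this assumes the image ships an sh binary):

kubectl exec -it <podname> -- sh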
Back to your question.
If you are interested in how Kubernetes works with containers, including your application image and the Kubernetes infrastructure, you first have to obtain the node's IP address:
kubectl describe pod <podname> | grep ^Node:
or by:
kubectl get pods -o wide
Next, connect to the node via ssh and run:
docker ps
You will see the containers there, including the one you are looking for.
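For the deployment in the question, you can filter for the container name, assuming a Docker runtime (Kubernetes-managed containers get names with a k8s_ prefix, as in the percona output above):

docker ps | grep auth-pod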