Docker image:
docker images | grep -i "gcc"
gcc-docker latest 84c4359e6fc9 21 minutes ago 1.37GB
docker run -it gcc-docker:latest
hello,world
Kubernetes pod deployed:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/hello-world to master-node
Normal Pulling 4s kubelet, master-node Pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Failed to pull image "gcc-docker:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcc-docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 0s kubelet, master-node Error: ErrImagePull
Normal BackOff 0s kubelet, master-node Back-off pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Error: ImagePullBackOff
YAML file used to deploy the pod:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    type: hello-world
spec:
  containers:
  - name: hello-world
    image: gcc-docker:latest
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 60']
    ports:
    - containerPort: 80
I tried pulling gcc-docker and got the same error. You may have this image present on your system already, but it is not on Docker Hub.
If you know the repository for this image, use that instead, and for authentication create a secret of the docker-registry type and reference it as imagePullSecrets (sketched below).
Also, one more thing: you are running the container on the master node, and I assume it's minikube or some local setup.
Minikube uses a dedicated VM to run Kubernetes, which is not the same machine as the one on which you installed minikube.
So images available on your laptop will not be available to minikube.
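For illustration, either of the following could address it; the secret name, registry address and credentials below are placeholders, not values from the question:

# Option 1: build the image directly inside minikube's Docker daemon,
# then set imagePullPolicy to IfNotPresent or Never so no remote pull is attempted
eval $(minikube docker-env)
docker build -t gcc-docker:latest .

# Option 2: if the image actually lives in a private registry, create a
# docker-registry secret and reference it from imagePullSecrets in the pod spec
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>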
So Jenkins is installed inside the cluster with the official Helm chart. These are my installed plugins as per the Helm release values:
installPlugins:
- kubernetes:1.18.1
- workflow-job:2.33
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.11.0
- blueocean:1.19.0
My Jenkinsfile relies on the following pod template to spin up slaves:
kind: Pod
spec:
  # dnsConfig:
  #   options:
  #     - name: ndots
  #       value: "1"
  containers:
  - name: dind
    image: docker:19-dind
    command:
    - cat
    tty: true
    volumeMounts:
    - name: dockersock
      readOnly: true
      mountPath: /var/run/docker.sock
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
Slaves (pod / dind container) start nicely as expected whenever there is a new build.
However, it breaks at the "docker build" step of the Jenkinsfile pipeline (docker build -t ...):
Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:285)
at java.base/sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:182)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265)
at java.base/sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1515)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:250)
at org.gradle.wrapper.Download.downloadInternal(Download.java:67)
at org.gradle.wrapper.Download.download(Download.java:52)
at org.gradle.wrapper.Install$1.call(Install.java:62)
at org.gradle.wrapper.Install$1.call(Install.java:48)
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:69)
at org.gradle.wrapper.Install.createDist(Install.java:48)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:107)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:63)
The command '/bin/sh -c ../gradlew clean bootJar' returned a non-zero code:
At first glance, I thought it was a DNS resolution issue with the slave container (docker:19-dind), since it is Alpine-based.
That's why I debugged its /etc/resolv.conf by adding sh "cat /etc/resolv.conf" to the Jenkinsfile.
I got :
nameserver 172.20.0.10
search cicd.svc.cluster.local svc.cluster.local cluster.local ap-southeast-1.compute.internal
options ndots:5
I removed the last line, options ndots:5, as recommended by many threads on the internet.
But it did not fix the issue. 😔
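For reference, the ndots override corresponds to the commented dnsConfig block in the pod template above (setting ndots to 1 instead of the cluster default of 5):

spec:
  dnsConfig:
    options:
      - name: ndots
        value: "1"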
I thought about it again and again, and I realized that the container responsible for this error is not the slave (docker:19-dind); instead, it is the intermediate containers that docker build spins up.
As a consequence, I added RUN cat /etc/resolv.conf as another layer in the Dockerfile (which starts with FROM gradle:5.5-jdk11).
Now, the resolv.conf is different:
Step 15/24 : RUN cat /etc/resolv.conf
---> Running in 91377c9dd519
; generated by /usr/sbin/dhclient-script
search ap-southeast-1.compute.internal
options timeout:2 attempts:5
nameserver 10.0.0.2
Removing intermediate container 91377c9dd519
---> abf33839df9a
Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
Basically, it is a different nameserver (10.0.0.2) than the slave container's nameserver (172.20.0.10), and there is NO ndots:5 in the resolv.conf of this intermediate container.
I was left confused after all these debugging steps and attempts.
Architecture
Jenkins Server (container)
  ||
  (spins up slaves)
  ||__ SlaveA (container, image: docker:19-dind)
         ||
         (runs "docker build")
         ||
         ||__ intermediate (container, image: gradle:5.5-jdk11)
Just add --network=host to docker build or docker run:
docker build --network=host -t foo/bar:latest .
Found the answer here
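For context, in the Jenkinsfile above the build step might then look roughly like this, assuming the Kubernetes plugin's container step is used to run inside the dind container; the image name is only an example:

container('dind') {
    sh 'docker build --network=host -t foo/bar:latest .'
}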
I have an ASP.NET Core Web API 2.2 project that runs normally on my local Docker Desktop. I'm trying to run it on Azure's AKS, but it won't run there, and I can't understand why.
Below is the PowerShell script that I use to publish my project into an app directory that is later copied into the container:
Remove-Item ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.API\bin\Release\* -Force -Recurse
dotnet publish ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.Comum.sln -c Release -r linux-musl-x64
$path = (Get-Location).ToString() + "\app"
if (Test-Path($path))
{
Remove-Item -Path $path -Force -Recurse
}
New-Item -ItemType Directory -Force app
Get-ChildItem ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.API\bin\Release\netcoreapp2.2\linux-musl-x64\publish\* | Copy-Item -Destination .\app -Recurse
Here is my Dockerfile
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-alpine3.9
WORKDIR /app /app
WORKDIR /app
ENTRYPOINT ["dotnet", "Selenium.API.dll"]
Below is my Docker build command:
docker build -t mylocaldocker/selenium-web-app:latest -t mylocaldocker/selenium-web-app:v0.0.2 .
And my Docker run command
docker run --name selweb --detach --rm -p 85:80 mylocaldocker/selenium-web-app:latest
Everything spins up nice and smooth, and I'm able to send requests locally on port 85 without an issue (port 80 is being used by IIS)
However, doing similar procedures on Azure's AKS, the container won't start. I use the identical PowerShell script to publish my application, and the Dockerfile is identical as well. My build command changes so that I can push to Azure Container Registry:
docker build -t myproject.azurecr.io/selenium-web-app:latest -t myproject.azurecr.io/selenium-web-app:v0.0.1 .
I log in to the Azure Container Registry and push the image to it:
docker push myproject.azurecr.io/selenium-web-app:latest
I've already created my AKS cluster and given it permission to pull images from my registry. I try to run the image on AKS using the command:
kubectl run seleniumweb --image myproject.azurecr.io/selenium-web-app:latest --port 80
And I get the response
deployment.apps "seleniumweb" created
However, when I get the running pods:
kubectl get pods
I get an Error status on my pod:
NAME READY STATUS RESTARTS AGE
seleniumweb-7b5f645698-9g7f6 0/1 Error 4 1m
When I get the logs from the pod:
kubectl logs seleniumweb-7b5f645698-9g7f6
I get this back:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
https://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
Below is the result of kubectl describe for the pod:
kubectl describe pods
Name: seleniumweb-7b5f645698-9g7f6
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: aks-agentpool-41564776-0/10.240.0.4
Start Time: Sun, 02 Jun 2019 11:40:47 -0300
Labels: pod-template-hash=7b5f645698
run=seleniumweb
Annotations: <none>
Status: Running
IP: 10.240.0.25
Controlled By: ReplicaSet/seleniumweb-7b5f645698
Containers:
seleniumweb:
Container ID: docker://1d548f4934632efb0b7c5a59dd0ac2bd173f2ee8fa5196b45d480fb10e88a536
Image: myproject.azurecr.io/selenium-web-app:latest
Image ID: docker-pullable://myproject.azurecr.io/selenium-web-app#sha256:97e2915a8b43aa8e726799b76274bb9b5b852cb6c78a8630005997e310cfd41a
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 145
Started: Sun, 02 Jun 2019 11:43:39 -0300
Finished: Sun, 02 Jun 2019 11:43:39 -0300
Ready: False
Restart Count: 5
Environment:
KUBERNETES_PORT_443_TCP_ADDR: myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io
KUBERNETES_PORT: tcp://myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io:443
KUBERNETES_SERVICE_HOST: myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mhvfv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mhvfv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mhvfv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned default/seleniumweb-7b5f645698-9g7f6 to aks-agentpool-41564776-0
Normal Created 4m (x4 over 5m) kubelet, aks-agentpool-41564776-0 Created container
Normal Started 4m (x4 over 5m) kubelet, aks-agentpool-41564776-0 Started container
Normal Pulling 3m (x5 over 5m) kubelet, aks-agentpool-41564776-0 pulling image "myproject.azurecr.io/selenium-web-app:latest"
Normal Pulled 3m (x5 over 5m) kubelet, aks-agentpool-41564776-0 Successfully pulled image "myproject.azurecr.io/selenium-web-app:latest"
Warning BackOff 20s (x24 over 5m) kubelet, aks-agentpool-41564776-0 Back-off restarting failed container
And I don't understand why, since everything runs fine on my local Docker. Any help would be greatly appreciated. Thanks
That Dockerfile looks funny. It doesn't do anything. WORKDIR just "sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile" (from docs.docker.com). So you're setting the working directory twice, then nothing else. And the entrypoint would then point to a nonexistent .dll since you never copied it over. I think you want to delete the first WORKDIR command and add this after the remaining WORKDIR command:
COPY . ./
Even better, use a two-stage build so the project builds inside Docker and only the build output is copied into the runtime image that gets published (see the sketch after this answer).
I don't know why docker run is working locally for you. Is it picking up an old image somehow? Based on your Dockerfile, it shouldn't run.
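For reference, a two-stage Dockerfile for this kind of project could look roughly like the sketch below; the SDK tag and the project path are assumptions, not values from the question:

# build stage: restore, build and publish inside the SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish src/Selenium.API/Selenium.API.csproj -c Release -o /app/publish

# runtime stage: copy only the published output into the small runtime image
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-alpine3.9
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Selenium.API.dll"]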
The intention is to execute Gatling perf tests from the command line. The equivalent docker command is:
docker run --rm -w /opt/gatling-fundamentals/ \
  tarunkumard/tarungatlingscript:v1.0 \
  ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
Now, to map the above docker run command to Kubernetes using kubectl, I have created a pod whose gradlewcommand.yaml file is below:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
Now the pod is created using the command below:
kubectl apply -f gradlewcommand.yaml
Now comes my actual requirement or question: how do I run or trigger the command so that it executes inside the container of the pod created above? Mind you, the pod name is gradlecommandfromcommandline.
Here is the command which solves the problem:
kubectl exec gradlecommandfromcommandline -- \
./gradlew gatlingRun-simulations.RuntimeParameters \
-DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
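If the goal is only to see the output of the run that the pod already performed through its command/args, the pod logs are also available:

kubectl logs gradlecommandfromcommandline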
I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes, I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted this as a comment above, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
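A minimal sketch of that workflow, assuming the image tag and manifest file name used in the question:

# point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
# rebuild the image so it exists inside the minikube VM
docker build -t my/nginx:latest .
# recreate the pod (imagePullPolicy should not be Always; see the other answers)
kubectl delete -f pod-yumserver.yaml
kubectl create -f pod-yumserver.yaml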
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
  - name: <name>
    image: localhost:5000/<local-image-name>
    imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster and you want a specific one to host the registry, just replace localhost in all the steps above with the hostname of the host where the registry container runs. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry on the worker nodes and restart the Docker daemon afterwards:
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
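Putting the main steps together with the my/nginx image from the question (single-host case, default localhost:5000 registry):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag my/nginx:latest localhost:5000/my-nginx
docker push localhost:5000/my-nginx
# then use image: localhost:5000/my-nginx with imagePullPolicy: IfNotPresent in the pod spec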
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
  - name: <name>
    image: <local-image-name>
    imagePullPolicy: Never
The easiest way to analyze ErrImagePull problems further is to SSH into the node and try to pull the image manually with docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
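A quick sketch of that check (via minikube ssh if this is minikube, or plain SSH onto the node otherwise):

# on the node itself
docker pull my/nginx:latest
docker images | grep nginx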
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above, and then run:
kubectl apply -f myfile.yaml
In your case, your YAML file should include imagePullPolicy: Never, as shown below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
Found this here: https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it; minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps, you should see a bunch of minikube containers already running.
KVM utils are probably already on your machine, but they can be installed like this on CentOS/RHEL:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually "docker-desktop" (i.e. not a remote cluster).
(Right-click the Docker icon, then select "Kubernetes" in the menu.)
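The same check can also be done from the command line; docker-desktop is the context name that Docker Desktop creates by default:

kubectl config get-contexts
kubectl config use-context docker-desktop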
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, apply a suitable docker tag, and create the manifest.
Kubernetes doesn't directly pull from the registry: it first searches for the image on local storage and only then goes to the Docker registry.
Pull the latest nginx image:
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to the Docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull from the Docker registry and fail with an ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look to the registry for the image and will fail with an ErrImageNeverPull error if the image is not found.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
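With that configuration in place, something along these lines should rebuild the image with both tags (docker:build is the fabric8 docker-maven-plugin goal):

mvn clean package docker:build
docker images akka-cluster-demo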
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl images; however, it didn't show up in crictl images.
Thanks to a comment on GitHub, I found out that the actual problem is a different namespace in containerd.
As the following two commands show, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
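A rough sketch of the full export/import round trip, assuming the image was built into the default namespace; the image name is illustrative:

# export from the namespace where the image currently lives
ctr -n default image export exported-app-image.tar my-app:latest
# import into the namespace that the kubelet/CRI actually uses
ctr -n k8s.io image import exported-app-image.tar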
I was facing a similar issue: the image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
I rebuilt the image and redeployed the deployment YAML, and it worked.