I am stuck at this point:
I have a local insecure Docker registry with some images in it, e.g. 192.168.1.161:5000/kafka:latest.
I have a Kubernetes cloud cluster, which I can access only via the ~/.kube/config file, e.g. with a token.
I need to deploy the manifest below, but Kubernetes cannot pull the image. The error message is:
Failed to pull image "192.168.1.161:5000/kafka:latest": rpc error:
code = Unknown desc = Error response from daemon: Get
https://192.168.1.161:5000/v2/: http: server gave HTTP response to
HTTPS client
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - name: port9094
    port: 9094
    targetPort: 9094
  selector:
    app: kafka
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: kafka
      containers:
      - name: redis
        image: 192.168.1.161:5000/kafka:latest
        imagePullPolicy: Always
        ports:
        - name: port9094
          containerPort: 9094
        envFrom:
        - configMapRef:
            name: env
      imagePullSecrets:
      - name: regsec
On the Kubernetes cluster I have created a secret "regsec" with this command:
kubectl create secret docker-registry regsec --docker-server=192.168.1.161 --docker-username=<name from config file> --docker-password=<token value from config file>
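(Note: since the image reference is 192.168.1.161:5000/kafka:latest, the --docker-server value presumably needs to include the port as well, so that the secret matches the registry in the image reference, e.g.:
kubectl create secret docker-registry regsec --docker-server=192.168.1.161:5000 --docker-username=<name from config file> --docker-password=<token value from config file>)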
cat ~/.docker/config.json
{
  "auths": {},
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.06.0-ce (linux)"
  }
}
cat /etc/docker/daemon.json
{
"insecure-registries":["192.168.1.161:5000"]
}
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
docker version
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:09:54 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:07:56 2018
OS/Arch: linux/amd64
Experimental: false
You need to go to each of your nodes and add the following to the Docker daemon configuration file, /etc/docker/daemon.json (create it if it does not exist):
{
"insecure-registries": ["192.168.1.161:5000"]
}
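After saving the file, restart the Docker daemon on each node so the change takes effect; on systemd-based nodes, for example:
sudo systemctl restart docker
You can then confirm the node reaches the registry over plain HTTP with a manual pull, e.g. docker pull 192.168.1.161:5000/kafka:latest.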
I used minikube for my Kubernetes cluster.
When I tried to run a Pod with an image from my private Docker registry (local, without authentication), the Pod didn't run, and kubectl describe showed a message indicating the registry couldn't be reached (paraphrasing).
To fix this, I had to configure insecure-registry for the Docker daemon. According to the Docker docs, this can be done in two ways: as a flag passed to the dockerd command, or by modifying /etc/docker/daemon.json (on Linux).
However, as I used minikube to create and configure the cluster and daemon, I instead followed the minikube docs to set the flag --insecure-registry. The complete command is:
minikube start --insecure-registry "DOMAIN_DOCKER_REGISTRY:PORT_DOCKER_REGISTRY"
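With the registry from the question, that would be, for example:
minikube start --insecure-registry "192.168.1.161:5000"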
I have come back to this thread over and over trying to find the right way to get rid of the certificate issues, without much success.
I finally solved the problem by installing the root of the self-signed certificate chain into the system trust store on all the Kubernetes machines. On Ubuntu, you can import it via:
sudo mv internal-ca.cert /usr/local/share/ca-certificates/internal-ca.crt
sudo update-ca-certificates
Keep in mind that if you have a certificate chain, it requires the root certificate, not the intermediate certificate. You can check whether the import worked by running:
openssl s_client -connect <YOUR REGISTRY HERE> -showcerts < /dev/null
You should see something like:
CONNECTED(00000005)
as the response.
Related
I installed telepresence using the curl command, on a Kind cluster on my Mac M1 laptop. I can interact correctly with other services deployed in the cluster, and telepresence list shows some interceptable services, but I cannot intercept any of them.
Steps to reproduce the behavior:
I start an Nginx service with the following manifest. I have created a Docker image, gsantoro/my-nginx:latest, which is a modified version of Nginx that exposes port 3000 instead of 80:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gsantoro/my-nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 3000
When I run telepresence intercept my-nginx --port 3000, I see the following errors:
failed to clear chain TEL_INBOUND_TCP: running [/sbin/iptables -t nat -N TEL_INBOUND_TCP --wait]: exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `nat': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
Stream closed EOF for default/my-nginx-6c4489594c-pws84 (tel-agent-init)
Output of telepresence version
❯ telepresence version
Enhanced Client: v2.9.4
Root Daemon : v2.9.4
User Daemon : v2.9.4
Traffic Manager: v2.9.4
Operating system of the workstation running telepresence commands: macOS 12.6.1, Apple M1 Max chipset.
➜ docker version
Client:
Cloud integration: v1.0.29
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: baeda1f
Built: Tue Oct 25 18:01:18 2022
OS/Arch: darwin/arm64
Context: default
Experimental: true
Server: Docker Desktop 4.14.1 (91661)
Engine:
Version: 20.10.21
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 3056208
Built: Tue Oct 25 17:59:41 2022
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.9
GitCommit: 1c90a442489720eec95342e1789ee8a5e1b9536f
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Kubernetes environment and version
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:28:30Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-11-02T03:24:50Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/arm64"}
Is there a way to upgrade the version of iptables in the Docker engine?
What happened:
Add "USER 999:999" in Dockerfile to add default uid and gid into container image, then start the container in Pod , its UID is 999, but its GID is 0.
In container started by Docker the ID is correct
docker run --entrypoint /bin/bash -it test
bash-5.0$ id
uid=9999 gid=9999 groups=9999
But when started as a Pod, the GID is 0:
kubectl exec -it test /bin/bash
bash-5.0$ id
uid=9999 gid=0(root) groups=0(root)
bash-5.0$
bash-5.0$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
systemd-coredump:x:200:200:systemd Core Dumper:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
If the Dockerfile runs an extra useradd command, then the GID seems to be correct in the Pod:
RUN useradd -r -u 9999 -d /dev/null -s /sbin/nologin abc
USER 9999:9999
Then the IDs in the Pod's container are the same as set in the Dockerfile:
bash-5.0$ id
uid=9999(abc) gid=9999(abc) groups=9999(abc)
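For reference, a minimal Dockerfile for this working variant might look like the following (the fedora:30 base image is an assumption, chosen to match the /etc/passwd and environment above):
# Sketch of the working variant: create the user first, then switch to it.
FROM fedora:30
RUN useradd -r -u 9999 -d /dev/null -s /sbin/nologin abc
USER 9999:9999
CMD ["/bin/bash"]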
What you expected to happen: the GID of the container in the Pod should also be 999.
How to reproduce it (as minimally and precisely as possible):
Add "USER 999:999" to the Dockerfile.
Then start the container in a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: test
    imagePullPolicy: Never
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
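The resulting IDs can then be checked with, for example:
kubectl exec -it test -- id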
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
OS (e.g: cat /etc/os-release): Fedora release 30 (Thirty)
docker version
Client:
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df9ba
Built: Wed Sep 4 16:52:09 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.9
API version: 1.39 (minimum version 1.12)
Go version: go1.11.13
Git commit: 039a7df
Built: Wed Sep 4 16:22:32 2019
OS/Arch: linux/amd64
Experimental: false
I realize this isn't what you asked, but since I don't know why the USER directive isn't honored, I'll point out that you have explicit influence over the UID and GID used by your Pod via the securityContext:
spec:
  securityContext:
    runAsUser: 999
    runAsGroup: 999
  containers:
  - ...
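Applied to the test Pod from the question, a minimal sketch would look like:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  securityContext:
    runAsUser: 999
    runAsGroup: 999
  containers:
  - name: test
    image: test
    imagePullPolicy: Never
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]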
I'm learning Kubernetes and I've just installed minikube on my Mac.
I have a Docker image that I'd like to deploy. I created a deployment YAML file which looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
spec:
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - image: docker-sonarqube-developer:latest
        args:
        - -Dsonar.web.context=/
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          value: sonarqube
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonarqube
        ports:
        - containerPort: 9000
          name: sonarqube
I am trying to deploy my docker image on minikube with the following command:
kubectl create -f deployment.yaml
But I'm getting an error and I'm not sure what's going on.
W0628 09:18:45.550812 64359 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger
error: error validating "k8s/deployment.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
Minikube is running and I can access the dashboard.
❯ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 17h v1.15.0
The Docker image is available locally:
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
6fcfdad92d16 docker-sonarqube-developer "./bin/run.sh" 16 hours
Any idea what's wrong?
Thanks!
First, check the kubectl version. Check whether the Minor version of the client and the server are the same:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2",
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0",
If not, then follow the steps below (these download the latest stable kubectl for macOS):
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
Now check the version again:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2",
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0",
$ kubectl create -f deployment.yaml
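If the create now succeeds, the rollout can be verified with, for example:
$ kubectl rollout status deployment/sonarqube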
I have seen that Google Kubernetes Engine is using Docker version 17.03.2-ce, build f5ec1e2, whereas I want Docker version 18.09.0, build 4d60db4.
The "unexpected EOF" error when adding an 8GB file (moby/moby#37771) has been fixed in the later version of Docker.
Is there any way I can manually upgrade the version?
Thanks
In Google Kubernetes Engine you should have the node OS set to Ubuntu. Then you can use a DaemonSet as a start-up script, with the following YAML file:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: ssd-startup-script
  labels:
    app: ssd-startup-script
spec:
  template:
    metadata:
      labels:
        app: ssd-startup-script
    spec:
      hostPID: true
      containers:
      - name: ssd-startup-script
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #!/bin/bash
            sudo curl -s https://get.docker.com/ | sh
            echo Done
Then the Docker version should look like this:
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false
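If your cluster's nodes are not already running Ubuntu, the node image can be selected when creating the cluster, for example (the cluster name here is a placeholder):
gcloud container clusters create my-cluster --image-type=UBUNTU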
The following deployment file works when I apply it from my local machine.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
  namespace: app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/myproject/api:1535462260754
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
The same file is on a remote Compute Engine machine that runs Jenkins. On this machine, over ssh, I'm also able to apply this config. But when it's executed from a Jenkins "Execute shell" build step, it always throws:
error: unable to recognize "./dist/cluster/api.deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried changing apiVersion to apps/v1beta1 and to extensions/v1beta1 as well.
I don't know what else to try.
Update 1
kubectl version on Compute Engine:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.5", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-02T23:42:40Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
Update 2
Running kubectl version inside the Jenkins job shows this:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Error from server (Forbidden): <html><head><meta http-equiv='refresh' content='1;url=/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s'/><script>window.location.replace('/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
</body></html>
Probably the kubectl version in your Jenkins server or agent is old. Try running kubectl version from the Jenkins job to check for mismatches.
Thanks to @csanchez I figured out that I needed to get credentials under the jenkins user. For that I just ran this command:
gcloud container clusters get-credentials cluster-1 --zone=my-cluster-zone --project myproject
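Since the Jenkins job runs as the jenkins user, the same can be done directly as that user, for example (assuming sudo access on the machine):
sudo -u jenkins -H gcloud container clusters get-credentials cluster-1 --zone=my-cluster-zone --project myproject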