MongoDB and mongo-express run on minikube with the Docker driver. There are YAML files for the MongoDB deployment, the MongoDB Secret, the mongo-express deployment, and the MongoDB ConfigMap; the Service definitions are written in the MongoDB and mongo-express deployment files.
I cannot open mongo-express in a web browser; curl is refused as well.
mongo-express YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongoex-deployment
  labels:
    app: mongoex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongoex
  template:
    metadata:
      labels:
        app: mongoex
    spec:
      containers:
        - name: mongoex
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MOGNODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongo-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongoex-service
spec:
  selector:
    app: mongoex
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
When I run minikube service [mongo-express service]:
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ minikube service
.mongo-configmap.yaml.swp  mongodb-secret.yaml      mongoex-deployment.yaml
mongo-configmap.yaml       mongodb-deployment.yaml  .mongodb-.swp
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ minikube service
❌ Exiting due to MK_USAGE: You must specify service name(s) or --all
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ kubectl get service
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP          25h
mongodb-service   ClusterIP      10.102.183.17   <none>        27017/TCP        60m
mongoex-service   LoadBalancer   10.106.109.43   <pending>     8081:32367/TCP   14m
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ minikube service mongoex-service
|-----------|-----------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------|-------------|---------------------------|
| default | mongoex-service | 8081 | http://192.168.49.2:32367 |
|-----------|-----------------|-------------|---------------------------|
🎉 Opening service default/mongoex-service in default browser...
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ curl http://192.168.49.2:32367
curl: (7) Failed to connect to 192.168.49.2 port 32367: Connection refused
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
mongodb-deployment-844789cd64-b9kj8   1/1     Running   0          61m
mongoex-deployment-6966646b5f-9dz4c   1/1     Running   0          15m
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ kubectl logs mongoex-deployment-6966646b5f-9dz4c
Welcome to mongo-express
(node:8) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
(node:8) UnhandledPromiseRejectionWarning: MongoError: command listDatabases requires authentication
at Connection.<anonymous> (/node_modules/mongodb/lib/core/connection/pool.js:453:61)
at Connection.emit (events.js:314:20)
at processMessage (/node_modules/mongodb/lib/core/connection/connection.js:456:10)
at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connection.js:625:15)
at Socket.emit (events.js:314:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
(node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:8) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
parallels@parallels-Parallels-Virtual-Platform:~/minikube-projects/mongo-project$ minikube service list
|-------------|-----------------|--------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-----------------|--------------|---------------------------|
| default | kubernetes | No node port |
| default | mongodb-service | No node port |
| default | mongoex-service | 8081 | http://192.168.49.2:32367 |
| kube-system | kube-dns | No node port |
|-------------|-----------------|--------------|---------------------------|
I restarted all pods, Minikube, and Linux, and checked all the info in the YAML files.
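For reference, those restart steps, assuming the deployment names above, were roughly:

kubectl rollout restart deployment mongodb-deployment mongoex-deployment
minikube stop && minikube start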
Thanks to all who tried to help me with this issue.
I've spent plenty of time fixing this yesterday.
The latest version of mongo-express does not work well in my environment, so I pulled the image mongo-express:0.49.0 and the issue was resolved. It was probably a software problem, but if you have any comments to add, I would like to read them and understand this issue more deeply.
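If you want to reproduce the fix, pinning the image tag is a one-line change in the deployment above (a sketch):

containers:
  - name: mongoex
    image: mongo-express:0.49.0   # pin a known-good tag instead of the implicit :latest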
Related:
Kubernetes service external ip pending
Kubernetes (Minikube) external ip does not work
Initial state:
$ kubectl.exe get service -o wide
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP          22h   <none>
mongo-express-service   LoadBalancer   10.102.123.226   <pending>     8081:30000/TCP   14m   app=mongo-express
mongodb-service         ClusterIP      10.104.217.138   <none>        27017/TCP        29m   app=mongodb
after patching with external IP:
$ kubectl patch svc mongo-express-service -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
service/mongo-express-service patched
the service gets an external IP:
$ kubectl.exe get service -o wide
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE   SELECTOR
kubernetes              ClusterIP      10.96.0.1        <none>          443/TCP          22h   <none>
mongo-express-service   LoadBalancer   10.102.123.226   172.31.71.218   8081:30000/TCP   14m   app=mongo-express
mongodb-service         ClusterIP      10.104.217.138   <none>          27017/TCP        29m   app=mongodb
however it's not reachable:
$ wget 172.31.71.218:30000
--2022-05-05 00:23:11-- http://172.31.71.218:30000/
Connecting to 172.31.71.218:30000... failed: Connection timed out.
Retrying.
--2022-05-05 00:23:33-- (try: 2) http://172.31.71.218:30000/
Connecting to 172.31.71.218:30000...
The service looks alright:
$ kubectl describe svc mongo-express-service
Name: mongo-express-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mongo-express
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.123.226
IPs: 10.102.123.226
External IPs: 172.31.71.218
Port: <unset> 8081/TCP
TargetPort: 8081/TCP
NodePort: <unset> 30000/TCP
Endpoints: 172.17.0.4:8081
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalIP 34m service-controller Count: 0 -> 1
Launching the service with minikube:
$ minikube.exe service mongo-express-service
|-----------|-----------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------------|-------------|---------------------------|
| default | mongo-express-service | 8081 | http://192.168.49.2:30000 |
|-----------|-----------------------|-------------|---------------------------|
* Starting tunnel for service mongo-express-service.
|-----------|-----------------------|-------------|-----------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------------|-------------|-----------------------|
| default | mongo-express-service | | http://127.0.0.1:1298 |
|-----------|-----------------------|-------------|-----------------------|
* Opening service default/mongo-express-service in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
* Stopping tunnel for service mongo-express-service.
It works for the URL http://127.0.0.1:1298 but not for the external IP.
minikube tunnel also fails:
$ minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service mongo-express-service.
When started, only the internal address is reachable; it was reachable even before the tunnel was started.
Setup: Windows 10, minikube started with the Docker driver (minikube start --driver=docker)
Is it possible to expose the internal address on Windows?
mongo-express.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
Looking at
https://github.com/kubernetes/minikube/issues/7344#issuecomment-607318525
the Docker network does not seem to work on Windows.
Windows 10 editions and whether Hyper-V can be enabled:
Operating System Requirements
The Hyper-V role can be enabled on these versions of Windows 10:
Windows 10 Enterprise
Windows 10 Professional
Windows 10 Education
The Hyper-V role cannot be installed on:
Windows 10 Home
Windows 10 Mobile
Windows 10 Mobile Enterprise
List all of the features available in the operating system:
DISM /Online /Get-Feature
See what name is listed there.
Download and install Windows 10 Client Hyper-V
Hyper-V on Windows 10 - Document links:
https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/windows_welcome
Requirements:
https://msdn.microsoft.com/virtualization/hyperv_on_windows/quick_start/walkthrough_compatibility
taken from:
https://social.technet.microsoft.com/Forums/ie/en-US/c3d8faaa-2e5a-4cfb-a681-9dfdf8bc5310/cant-install-hyperv-on-windows-10-version-1001024016384-feature-name-microsofthyperv-is?forum=win10itprovirt
Eventually I had to use the Docker VM. After reinstalling the latest versions of VirtualBox and minikube, the existing host could not be loaded; this is already mentioned here:
https://github.com/kubernetes/minikube/issues/9130
Windows version:
systeminfo /fo csv | ConvertFrom-Csv | select OS*, System*, Hotfix* | Format-List
OS Name : Microsoft Windows 10 Home
OS Version : 10.0.19044 N/A Build 19044
OS Manufacturer : Microsoft Corporation
OS Configuration : Standalone Workstation
OS Build Type : Multiprocessor Free
System Boot Time : 05/05/2022, 21:27:10
System Manufacturer : --
System Model : --
System Type : x64-based PC
System Directory : C:\WINDOWS\system32
System Locale : en-us;English (United States)
Hotfix(s) : 13 Hotfix(s) Installed.,[01]: KB5012117,[02]: KB4562830,[03]: KB4577586,[04]: KB4580325,[05]:
KB4598481,[06]: KB5000736,[07]: KB5003791,[08]: KB5012599,[09]: KB5006753,[10]: KB5007273,[11]:
KB5011352,[12]: KB5011651,[13]: KB5005699
I have a Docker container which runs a basic front-end Angular app. I have verified it runs with no issues, and I can successfully access the web app in the browser with docker run -p 5901:80 formbuilder-stand-alone-form.
I am able to successfully deploy it with minikube and Kubernetes on my cloud dev server:
apiVersion: v1
kind: Service
metadata:
  name: stand-alone-service
spec:
  selector:
    app: stand-alone-form
  ports:
    - protocol: TCP
      port: 5901
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stand-alone-form-app
  labels:
    app: stand-alone-form
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stand-alone-form
  template:
    metadata:
      labels:
        app: stand-alone-form
    spec:
      containers:
        - name: stand-alone-form-pod
          image: formbuilder-stand-alone-form
          imagePullPolicy: Never
          ports:
            - containerPort: 80
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
stand-alone-form-app-6d4669f569-vsffc   1/1     Running   0          6s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
stand-alone-form-app   1/1     1            1           8s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get services
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1       <none>        443/TCP          5d7h
stand-alone-service   LoadBalancer   10.96.197.197   <pending>     5901:30443/TCP   21s
However, I am not able to access it with the url:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh
% minikube service stand-alone-service
|-----------|---------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|---------------------------|
| default | stand-alone-service | 5901 | http://192.168.49.2:30443 |
|-----------|---------------------|-------------|---------------------------|
In this example, http://192.168.49.2:30443/ gives me a dead web page.
I disabled all my iptables rules for troubleshooting.
Any idea how to access the front-end web app? I was thinking I might have the selectors wrong, but I'm not sure.
UPDATE: Here are the requested new outputs:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl describe service stand-alone-service
Name: stand-alone-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=stand-alone-form
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.197.197
IPs: 10.96.197.197
LoadBalancer Ingress: 10.96.197.197
Port: <unset> 5901/TCP
TargetPort: 80/TCP
NodePort: <unset> 30443/TCP
Endpoints: 172.17.0.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% minikube tunnel
Password:
Status:
machine: minikube
pid: 237498
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: [stand-alone-service]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Note: I noticed that with the tunnel I do have an external IP for the LoadBalancer now:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get service
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1       <none>          443/TCP          5d11h
stand-alone-service   LoadBalancer   10.98.162.179   10.98.162.179   5901:31596/TCP   3m10s
It looks like your LoadBalancer hasn't quite resolved correctly, as the External-IP is still marked as <pending>
According to Minikube, this happens when the tunnel is missing:
https://minikube.sigs.k8s.io/docs/handbook/accessing/#check-external-ip
Have you tried running minikube tunnel in a separate command window?
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
https://minikube.sigs.k8s.io/docs/commands/tunnel/
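A sketch of the usual pattern with the Docker driver (service name taken from your question):

# terminal 1: keep this open; it creates the route into the cluster
minikube tunnel

# terminal 2: EXTERNAL-IP should move from <pending> to a routable address
kubectl get service stand-alone-service
curl http://<EXTERNAL-IP>:5901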
I am following a tutorial to access a pod running inside a Kubernetes cluster behind a service. This Kubernetes cluster is running on Windows 10 using Docker Desktop (with the Kubernetes option enabled).
I am unable to access it using https://local.ticket.dev/api/users/currentuser; it always says "Site can't be reached: local.ticket.dev unexpectedly closed the connection."
I have disabled the redirect, but it still redirects HTTP to HTTPS:
Request URL: http://local.ticket.dev/api/users/currentuser
Request Method: GET
Status Code: 307 Internal Redirect
Referrer Policy: strict-origin-when-cross-origin
Location: https://local.ticket.dev/api/users/currentuser
Non-Authoritative-Reason: HSTS
Here is visually what I want
kubectl get ing
NAME              CLASS    HOSTS              ADDRESS   PORTS   AGE
ingress-service   <none>   local.ticket.dev             80      29s
kubectl get services
Please note it's running on a local Windows 10 machine with Docker Desktop, and the LoadBalancer external IP always remains pending, even after 6 hours:
NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
auth-srv                                   ClusterIP      10.96.254.94     <none>        3000/TCP                     45s
kubernetes                                 ClusterIP      10.96.0.1        <none>        443/TCP                      5h17m
nginx-ingress-1629401528-controller        LoadBalancer   10.110.199.210   <pending>     80:31430/TCP,443:32346/TCP   5h13m
nginx-ingress-1629401528-default-backend   ClusterIP      10.108.79.252    <none>        80/TCP                       5h13m
kubectl get pods
NAME                                                        READY   STATUS    RESTARTS   AGE
auth-depl-c98cdf66f-txqxt                                   1/1     Running   0          54s
nginx-ingress-1629401528-controller-569576ddbd-2htxz        1/1     Running   0          5h13m
nginx-ingress-1629401528-default-backend-69c7fc6549-xxf8w   1/1     Running   0          5h13m
How I configured it is as follows:
1 - Installed NGINX with the following command:
helm install stable/nginx-ingress --generate-name
2 - Ran skaffold dev:
Listing files to watch...
- billo/ticket_auth
Generating tags...
- billo/ticket_auth -> billo/ticket_auth:latest
Some taggers failed. Rerun with -vdebug for errors.
Checking cache...
- billo/ticket_auth: Found Locally
Starting test...
Tags used in deployment:
- billo/ticket_auth -> billo/ticket_auth:d869228....
Starting deploy...
- deployment.apps/auth-depl created
- service/auth-srv created
- ingress.networking.k8s.io/ingress-service created
Waiting for deployments to stabilize...
- deployment/auth-depl is ready.
Deployments stabilized in 2.302 seconds
Waiting for deployments to stabilize...
Deployments stabilized in 6.9904ms
Press Ctrl+C to exit
Watching for changes...
[auth]
[auth] > auth#1.0.0 start
[auth] > ts-node-dev --poll src/index.ts
[auth]
[auth] [INFO] 00:59:23 ts-node-dev ver. 1.1.8 (using ts-node ver. 9.1.1, typescript ver. 4.3.5)
[auth] Auth!!!! listen to 3000 port
If I look at the last line, it seems that my auth pod is listening on port 3000.
auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: billo/ticket_auth
          imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: local.ticket.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
Configuration in the hosts file:
# Added by Docker Desktop
127.0.0.1 host.docker.internal
127.0.0.1 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
First, disable the HTTPS redirect by adding the annotation
nginx.ingress.kubernetes.io/ssl-redirect: "false"
to the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: local.ticket.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
Did you get the external IP for the NGINX controller svc? It shows pending because you are on a local system.
You might also need to add entries to the hosts file,
manually adding your ingresses' hostnames to /etc/hosts:
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
OR
<host IP> local.ticket.dev
I have the following .yaml file to install RedisInsight in Kubernetes, with persistence support.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redisinsight-storage-class
provisioner: 'kubernetes.io/gce-pd'
parameters:
  type: 'pd-standard'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-volume-claim
spec:
  storageClassName: redisinsight-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment is managing, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      initContainers:
        - name: change-data-dir-ownership
          image: alpine:3.6
          command:
            - chmod
            - -R
            - '777'
            - /db
          volumeMounts:
            - name: redisinsight
              mountPath: /db
      containers:
        - name: redisinsight # container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.1 # repo/image
          imagePullPolicy: Always # always pull image
          volumeMounts:
            - name: redisinsight # pod volumes to mount into the container's filesystem; cannot be updated
              mountPath: /db
          ports:
            - containerPort: 8001 # exposed container port and protocol
              protocol: TCP
      volumes:
        - name: redisinsight
          persistentVolumeClaim:
            claimName: redisinsight-volume-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - port: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
However, it fails to launch and gives an error:
INFO 2020-07-03 06:30:08,117 redisinsight_startup Registered SIGTERM handler
ERROR 2020-07-03 06:30:08,131 redisinsight_startup Error in main()
Traceback (most recent call last):
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
Traceback (most recent call last):
File "./startup.py", line 495, in <module>
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
But the same docker image, when run locally via docker as:
docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight
works fine. What am I doing wrong?
It feels like RedisInsight is trying to read the port as an int but somehow gets a string and is confused. But I cannot understand why this works fine with the local docker run.
UPDATE:
RedisInsight's Kubernetes documentation has been updated recently. It clearly describes how to create a RedisInsight k8s deployment with and without a service.
It also explains what to do when there's a service named "redisinsight" already:
Note - If the deployment will be exposed by a service whose name is ‘redisinsight’, set REDISINSIGHT_HOST and REDISINSIGHT_PORT environment variables to override the environment variables created by the service.
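A sketch of that override in the container spec of the deployment above; the variable names come from the quoted documentation, while the values are assumptions based on the defaults RedisInsight prints at startup:

env:
  - name: REDISINSIGHT_HOST
    value: "0.0.0.0"
  - name: REDISINSIGHT_PORT
    value: "8001"   # a plain port number, overriding the service-injected endpoint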
The problem is with the name of the service.
From the documentation, it is mentioned that RedisInsight has an environment variable REDISINSIGHT_PORT which configures the port on which RedisInsight runs.
When you create a service in Kubernetes, all the pods that match the service get an environment variable <SERVICE_NAME>_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>.
So when you create the above mentioned service with the name redisinsight, Kubernetes injects the service environment variable REDISINSIGHT_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>. But the port environment variable (REDISINSIGHT_PORT) is documented to be a port number, not an endpoint, which makes the pod crash when RedisInsight tries to use the environment variable as the port number.
So change the name of the service to something other than redisinsight, and it should work.
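You can confirm the clash by printing the injected service environment variables inside the pod (a sketch; substitute your actual pod name):

$ kubectl exec <redisinsight-pod-name> -- env | grep REDISINSIGHT
REDISINSIGHT_PORT=tcp://10.69.9.111:8001   # injected by the service: an endpoint, not a port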
Here's a quick deployment and service file:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment is managing, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      containers:
        - name: redisinsight # container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.3 # repo/image
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: db # pod volumes to mount into the container's filesystem; cannot be updated
              mountPath: /db
          ports:
            - containerPort: 8001 # exposed container port and protocol
              protocol: TCP
      volumes:
        - name: db
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Service:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http # name should not be redisinsight
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8001
  selector:
    app: redisinsight
Please note the name of the service.
Logs of redisinsight pod:
INFO 2020-09-02 11:46:20,689 redisinsight_startup Registered SIGTERM handler
INFO 2020-09-02 11:46:20,689 redisinsight_startup Starting webserver...
INFO 2020-09-02 11:46:20,689 redisinsight_startup Visit http://0.0.0.0:8001 in your web browser. Press CTRL-C to exit.
Also, the service endpoint (from minikube):
$ minikube service list
|----------------------|------------------------------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|------------------------------------|--------------|-------------------------|
| default | kubernetes | No node port |
| default | redisinsight-http | 80 | http://172.17.0.2:30860 |
| kube-system | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|------------------------------------|--------------|-------------------------|
BTW, if you don't want to create a service at all (which is not related to the question), you can do port forwarding:
kubectl port-forward <redisinsight-pod-name> 8001:8001
The problem is related to the service, as it's interfering with the pod and causing it to crash.
As we can read in the Redis docs Installing RedisInsight on Kubernetes
Once the deployment has been successfully applied and the deployment complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:
kubectl port-forward deployment/redisinsight 8001
Open your browser and point to http://localhost:8001
Or a service, which in your case (using GCP) can look like this:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - protocol: TCP
      port: 8001
      targetPort: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
Once the service receives the external IP, you can use it to access RedisInsight:
crou@cloudshell:~ $ kubectl get service
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
kubernetes     ClusterIP      10.8.0.1     <none>          443/TCP          9d
redisinsight   LoadBalancer   10.8.7.0     34.67.171.112   8001:31456/TCP   92s
via http://34.67.171.112:8001/ in my example.
It happened to me too. In case anyone missed the conversation in the comments, here is the solution (a sketch follows):
Deploy the redisinsight pod first and wait until it is running successfully.
Then deploy the service.
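A sketch of that ordering (the manifest file names are assumptions):

kubectl apply -f redisinsight-deployment.yaml
kubectl rollout status deployment/redisinsight   # wait until the pod is up
kubectl apply -f redisinsight-service.yaml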
I think this is a bug, and it is not really a proper fix because a pod can die at any time; that is kind of against the point of using Kubernetes.
Someone has reported this issue here: https://forum.redislabs.com/t/redisinsight-fails-to-launch-in-kubernetes/652/2
There are several problems with redisinsight running in k8s as suggested by the current documentation. I will list them below:
Suggestion is to use emptyDir
Issue: emptyDir will most likely run out of space for larger Redis clusters
Solution: Use a persistent volume
The redisinsight docker container uses a redisinsight user
Issue: the redisinsight user is not tied to a specific uid. For this reason the persistent volume permissions cannot be set in a way that allows access to the PVC
Solution: use cryptexlabs/redisinsight:latest, which extends redislabs/redisinsight:latest but sets the uid for redisinsight to 777
default permissions do not allow access for redisinsight
Issue: redisinsight will not be able to access the /db directory
Solution: Use an init container to set the directory permissions so that user 777 owns the /db directory
Suggestion is to use a NodePort for the service
Issue: this is a security hole
Solution: Use ClusterIP instead and then use kubectl port forwarding to gain access, or some other secure access to redisinsight (see the sketch after this list)
Accessing rdb files locally is impractical.
Problem: rdb files for large clusters must be downloaded and uploaded via kubectl
Solution: Use the S3 solution. If you are using kube2iam in an EKS cluster, you'll need to create a special role that has access to the bucket. Before that you must create a backup of your cluster and then export the backup following these instructions: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html
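As referenced above, a sketch of the ClusterIP-plus-port-forward approach (service name assumed):

apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http
spec:
  type: ClusterIP   # cluster-internal only; no NodePort is opened
  selector:
    app: redisinsight
  ports:
    - port: 8001
      targetPort: 8001

kubectl port-forward service/redisinsight-http 8001:8001

Then browse to http://localhost:8001.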
Summary
RedisInsight is a good tool. But currently running it inside a Kubernetes cluster is an absolute nightmare and I t
I have this weird error plaguing me.
I am trying to get an ActiveMQ pod running with a Kubernetes StatefulSet, with a volume attached.
The ActiveMQ is just a plain old vanilla Docker image; I picked it from here: https://hub.docker.com/r/rmohr/activemq/
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@3fee9989: startup date [Thu Aug 23 22:12:07 GMT 2018]; root of context hierarchy
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
INFO | KahaDB is version 6
INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) is starting
INFO | Listening for connections at: tcp://activemq-0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector openwire started
INFO | Listening for connections at: amqp://activemq-0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector amqp started
INFO | Listening for connections at: stomp://activemq-0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://activemq-0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
WARN | ServletContext@o.e.j.s.ServletContextHandler@65a15628{/,null,STARTING} has uncovered http methods for path: /
INFO | Listening for connections at ws://activemq-0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) started
INFO | For help or more information please see: http://activemq.apache.org
WARN | Store limit is 102400 mb (current store usage is 6 mb). The data directory: /opt/activemq/data/kahadb only has 95468 mb of usable space. - resetting to maximum available disk space: 95468 mb
WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:336)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:304)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.security.SecurityHandler.doStart(SecurityHandler.java:391)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.security.ConstraintSecurityHandler.doStart(ConstraintSecurityHandler.java:449)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.Server.start(Server.java:387)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.server.Server.doStart(Server.java:354)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171]
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:265)[spring-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.config.MethodInvokingBean.invokeWithTargetException(MethodInvokingBean.java:119)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.config.MethodInvokingFactoryBean.afterPropertiesSet(MethodInvokingFactoryBean.java:106)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1692)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1630)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:742)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.apache.xbean.spring.context.ResourceXmlApplicationContext.<init>(ResourceXmlApplicationContext.java:64)[xbean-spring-4.2.jar:4.2]
at org.apache.xbean.spring.context.ResourceXmlApplicationContext.<init>(ResourceXmlApplicationContext.java:52)[xbean-spring-4.2.jar:4.2]
at org.apache.activemq.xbean.XBeanBrokerFactory$1.<init>(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4]
at org.apache.activemq.xbean.XBeanBrokerFactory.createApplicationContext(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4]
at org.apache.activemq.xbean.XBeanBrokerFactory.createBroker(XBeanBrokerFactory.java:67)[activemq-spring-5.15.4.jar:5.15.4]
at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:71)[activemq-broker-5.15.4.jar:5.15.4]
at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:54)[activemq-broker-5.15.4.jar:5.15.4]
at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:87)[activemq-console-5.15.4.jar:5.15.4]
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4]
at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)[activemq-console-5.15.4.jar:5.15.4]
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4]
at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)[activemq-console-5.15.4.jar:5.15.4]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171]
at org.apache.activemq.console.Main.runTaskClass(Main.java:262)[activemq.jar:5.15.4]
at org.apache.activemq.console.Main.main(Main.java:115)[activemq.jar:5.15.4]
The ActiveMQ pod runs fine if we don't define it with a StatefulSet.
Below is the spec:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: activemq
  namespace: dev
  labels:
    app: activemq
spec:
  replicas: 1
  serviceName: activemq-svc
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
        runAsNonRoot: false
      containers:
        - name: activemq
          image: "mydocker/amq:latest"
          imagePullPolicy: "Always"
          ports:
            - containerPort: 61616
              name: port-61616
            - containerPort: 8161
              name: port-8161
          volumeMounts:
            - name: activemq-data
              mountPath: "/opt/activemq/data"
      restartPolicy: Always
      imagePullSecrets:
        - name: regsecret
      tolerations:
        - effect: NoExecute
          key: appstype
          operator: Equal
          value: ibd-mq
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: appstype
                    operator: In
                    values:
                      - dev-mq
  volumeClaimTemplates:
    - metadata:
        name: activemq-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "gp2-us-east-2a"
        resources:
          requests:
            storage: 100Gi
WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
Unless you altered the activemq userid in your image, that filesystem permission issue is caused by this stanza in your PodSpec:
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
    runAsNonRoot: false
failing to match up with the userid configuration in rmohr/activemq:5.15.4:
$ docker run -it --entrypoint=/bin/bash rmohr/activemq:5.15.4 -c 'id -a'
uid=999(activemq) gid=999(activemq) groups=999(activemq)
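So a minimal fix, assuming you keep the stock rmohr/activemq image, is to align the pod's securityContext with that uid/gid:

spec:
  securityContext:
    runAsUser: 999    # matches the activemq uid baked into the image
    fsGroup: 999      # mounted volumes become group-writable for activemq
    runAsNonRoot: true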