I'm trying to run a simple node server on port 8080, but with the following config any attempt at hitting the subdomain results in a 502 Bad Gateway error. If I go to the node, I can see there don't appear to be any ports open on the container itself. So, assuming I've checked everything correctly, is there anything else I need to do in the config to open the port for the node server?
Edit: If I SSH into the pod and curl localhost on port 8080, I'm able to hit the node server.
Dockerfile
FROM node:12.18.1
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "node", "server.js" ]
k8s deployment
spec:
containers:
- name: test
image: test_image
ports:
- name: http
protocol: TCP
containerPort: 8080
service yaml
apiVersion: v1
kind: Service
metadata:
name: test-service
spec:
ports:
- name: http
port: 80
targetPort: 8080
protocol: TCP
- name: https
port: 443
targetPort: 8080
protocol: TCP
selector:
app: test-deployment
type: NodePort
externalTrafficPolicy: Cluster
Ingress
spec:
rules:
- host: dev.test.com
http:
paths:
- backend:
serviceName: test-service
servicePort: 80
path: /
This wound up being on the application side: the server needed to bind to 0.0.0.0 instead of 127.0.0.1.
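For anyone else hitting this, a quick way to confirm which address the server is actually bound to is to list the listening sockets inside the pod. A rough sketch (the pod name is a placeholder, and ss/netstat must exist in the image):
kubectl exec -it <test-pod-name> -- ss -ntlp
# 127.0.0.1:8080 in the Local Address column means loopback only, unreachable via the Service/Ingress;
# after binding to 0.0.0.0 it should show 0.0.0.0:8080 (or *:8080), i.e. listening on all interfaces.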
Related
I am working on an API that I will deploy with Kubernetes and I want to test it locally.
I created the Docker image, successfully tested it locally, and pushed it to a public Docker registry. Now I would like to deploy it in a Kubernetes cluster; no errors are being thrown, however, I am not able to make a request to the endpoint exposed by the Minikube tunnel.
Steps to reproduce:
Start Minikube container: minikube start --ports=127.0.0.1:30000:30000
Create deployment and service: kubectl apply -f fastapi.yaml
Start minikube tunnel: minikube service fastapi-server
Encountered the following error: 192.168.49.2 took too long to respond.
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
main.py:
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def root():
return {"status": "OK"}
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
fastapi.yaml:
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
replicas: 1
selector:
matchLabels:
app: fastapi-server
template:
metadata:
labels:
app: fastapi-server
spec:
containers:
- name: fastapi-server
image: smdf/fastapi-test
ports:
- containerPort: 8000
name: http
protocol: TCP
---
# service
apiVersion: v1
kind: Service
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
protocol: TCP
nodePort: 30000
Your problem is that you did not set the service selector:
# service
apiVersion: v1
kind: Service
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
selector: <------------- Missing part
app: fastapi-server <-------------
type: NodePort <------------- Set the type to NodePort
ports:
- port: 8000
targetPort: 8000
protocol: TCP
nodePort: 30000
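With the selector and type in place, a quick re-test (reusing the same file name from the reproduction steps) should reach the endpoint:
kubectl apply -f fastapi.yaml
kubectl get endpoints fastapi-server              # should now list the pod IP on port 8000
curl "$(minikube service fastapi-server --url)"   # should return {"status":"OK"}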
How do you check whether your service is defined properly?
I checked to see if there were any endpoints, and there weren't any, since you did not "attach" the service to your deployment:
kubectl get endpoints -A
For more info you can read this section in my GitHub repo:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services
I've gone through a fair few Stack Overflow posts and none of them have worked, so here's my issue:
I've got a simple node app bound to 0.0.0.0 on port 5000, with a single endpoint at /.
I've got two k8s objects, here's my Deployment object:
### pf deployment
apiVersion: apps/v1
kind: Deployment
metadata:
# Unique key of the Deployment instance
name: pf-deployment
spec:
# 1 Pod should exist at all times.
replicas: 1
selector:
matchLabels:
app: public-facing
template:
metadata:
labels:
# Apply this label to pods and default
# the Deployment label selector to this value
app: public-facing
spec:
containers:
- name: public-facing
# Run this image
image: pf:ale8k
ports:
- containerPort: 5000
Next, here is my Service object:
### pf service
apiVersion: v1
kind: Service
metadata:
name: pf-service
labels:
run: pf-service-label
spec:
type: NodePort ### the default type is ClusterIP, so NodePort must be set explicitly here
selector:
name: public-facing ### should match your labels defined for your angular pods
ports:
- protocol: TCP
targetPort: 5000 ### port your app listens on
port: 5000 ### port on which you want to expose it within your cluster
Finally, a very simple dockerfile:
### generic docker file
FROM node:12
WORKDIR /usr/src/app
COPY . .
RUN npm i
EXPOSE 5000
CMD ["npm", "run", "start"]
I have my image in minikube's local Docker registry, so that's not the issue...
When I try:
curl $(minikube service pf-service --url)
I get:
curl: (7) Failed to connect to 192.168.99.101 port 31753: Connection refused
When I try:
minikube service pf-service
I get a little further output:
Most likely you need to configure your SUID sandbox correctly
I have the hello-minikube image running and it works perfectly fine, so I presume it isn't my NACL?
I'm very new to kubernetes, so apologies in advance if it's very simple.
Thanks!
The Service has the selector name: public-facing, but the pod has the label app: public-facing. These need to be the same for the Endpoints of the service to be populated with pod IPs.
If you execute the command below
kubectl describe svc pf-service
you will see that Endpoints has no IPs, which is the cause of the connection refused error.
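To see the mismatch yourself before changing anything, compare the pod labels against the service selector (read-only checks, default namespace assumed):
kubectl get pods --show-labels                              # pods carry app=public-facing
kubectl get svc pf-service -o jsonpath='{.spec.selector}'   # prints the selector, currently name: public-facing
kubectl get endpoints pf-service                            # ENDPOINTS stays <none> until label and selector match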
Change the selector in the service as shown below to make it work.
### pf service
apiVersion: v1
kind: Service
metadata:
name: pf-service
labels:
run: pf-service-label
spec:
type: NodePort ### the default type is ClusterIP, so NodePort must be set explicitly here
selector:
app: public-facing ### should match your labels defined for your angular pods
ports:
- protocol: TCP
targetPort: 5000 ### port your app listens on
port: 5000 ### port on which you want to expose it within your cluster
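After updating the selector, re-applying the Service and re-running the same curl from the question should connect (the file name here is just a placeholder):
kubectl apply -f pf-service.yaml
kubectl describe svc pf-service               # Endpoints should now show the pod IP on port 5000
curl "$(minikube service pf-service --url)"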
I'm new to Kubernetes and I'm trying to learn it using minikube, but I'm facing a problem with accessing apps from outside the cluster. I created a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 8080
To access it I need to expose it declaratively or imperatively. The imperative way works:
kubectl expose deployment nginx-deployment --port 80 --type NodePort
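For reference, the Service that this imperative command generates can be printed without creating anything, so it can be compared with a hand-written manifest (the --dry-run=client flag needs a reasonably recent kubectl):
kubectl expose deployment nginx-deployment --port 80 --type NodePort \
  --dry-run=client -o yaml      # inspect the selector and ports it emits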
When I create a service declaratively, I always end up with a connection refused error:
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type : NodePort
ports:
- port : 8080
nodePort : 30018
protocol : TCP
selector:
app: nginx
curl -k http://NodeIP:NodePort returns:
curl: (7) Failed to connect to NodeIP port NodePort: Connection refused
As @Ansil suggested, your nginx should be configured to listen on port 8080 if you want to refer to this port in your Service definition. By default it listens on port 80.
You cannot make it listen on different port like 8080 simply by specifying different containerPort in your Deployment definition as in your example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 8080
You can easily verify it on your own by attaching to such a Pod:
kubectl exec -ti <nginx-pod-name> -- /bin/bash
Once you're there, run:
ss -ntlp
And you should see which port your nginx is actually listening on.
Additionally, you may run:
cat /etc/nginx/conf.d/default.conf
It will also tell you on which port your nginx is configured to listen. That's all. It's really simple. You changed containerPort to 8080, but inside your container nothing actually listens on that port.
You can still expose it as a Service (whether declaratively or imperatively), but it won't change anything, because it ultimately points to a port on your container on which nothing listens, and you'll see a message similar to this one:
curl: (7) Failed to connect to 10.1.2.3 port 30080: Connection refused
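If you'd rather not reconfigure nginx, the other fix is to point the Service at the port nginx already listens on. A sketch under that assumption (keep containerPort: 80 in the Deployment for accuracy; the nodePort stays as in the question):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 8080        # port exposed inside the cluster
      targetPort: 80    # nginx's default listen port in the container
      nodePort: 30018   # same external port as before
      protocol: TCP
  selector:
    app: nginx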
Once you create a service in minikube you can expose the service to the outside of the minikube VM (host machine) using the command
minikube service SERVICE_NAME
Refer: https://minikube.sigs.k8s.io/docs/reference/commands/service/
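For the nginx service from this question that would be, for example (--url just prints the address instead of opening a browser):
minikube service nginx --url
curl -k "$(minikube service nginx --url)"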
Background
I am testing a Kubernetes setup on Minikube. I have two simple services successfully set up, and they are backed by simple Docker images. Below is an example of my service configuration. I use NodePort to expose the services on port 80.
# service 1
kind: Service
apiVersion: v1
metadata:
name: service1
spec:
selector:
app: service1
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service1-deployment
labels:
app: service1
spec:
replicas: 1
selector:
matchLabels:
app: service1
template:
metadata:
labels:
app: service1
spec:
containers:
- name: service1
image: service1
imagePullPolicy: Never
ports:
- containerPort: 8080
---
# service 2
kind: Service
apiVersion: v1
metadata:
name: service2
spec:
selector:
app: service2
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service2-deployment
labels:
app: service2
spec:
replicas: 1
selector:
matchLabels:
app: service2
template:
metadata:
labels:
app: service2
spec:
containers:
- name: service2
image: service2
imagePullPolicy: Never
ports:
- containerPort: 8080
Issue
I use docker exec -it to go inside the Docker containers. I can curl service1 from the service2 container without any issue. However, if I try to curl service2 from the service2 container, I get a connection timeout error.
Results from curl -v service2
Rebuilt URL to: service2/
Trying 10.101.116.46...
TCP_NODELAY set
connect to 10.101.116.46 port 80 failed: Connection timed out
Failed to connect to service2 port 80: Connection timed out
Closing connection 0
curl: (7) Failed to connect to service2 port 80: Connection timed out
I guess the DNS record gets resolved correctly, because 10.101.116.46 is the correct IP attached to service2. So what could be causing this problem?
More Follow-up Tests
From my understanding, a Kubernetes service internally maps its port to the container port, so in my case it maps service port 80 to pod port 8080. From the service2 container, I am able to curl <service2 pod ip>:8080 successfully, but I am not able to curl <service2 ip>, which results in the connection timed out error. The same thing happens inside the service1 container: it can reach the pod but not the service. Is there an internal setting that I am missing?
This could be any of these (a few checks to tell them apart are sketched after the list):
The pod servicing service2 has a server process that is listening on 127.0.0.1 rather than on 0.0.0.0 (any IP address).
service2 returns a redirect and your service only listens on port 80. You would have to expose the other port (possibly 443) and run curl with the -L option to follow the redirect.
The pod servicing service2 is not listening on port 80 at all.
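A rough way to tell these apart (the pod name below is a placeholder, and ss must be available in the image):
# 1. What address/port is the process in the service2 pod actually listening on?
kubectl exec -it <service2-pod-name> -- ss -ntlp   # look for 0.0.0.0:8080 vs 127.0.0.1:8080
# 2. Does the Service have endpoints, and do its ports line up?
kubectl describe svc service2                      # Endpoints should show <pod-ip>:8080
# 3. Follow any redirect the app might return
curl -vL http://service2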
I try to remote-debug the application in attach mode with host 192.168.99.100 and port 5005, but it tells me that it is unable to open the debugger port. The IP is 192.168.99.100 (the cluster is hosted locally via minikube).
Output of kubectl describe service catalogservice
Name: catalogservice
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=catalogservice
Type: NodePort
IP: 10.98.238.198
Port: web 31003/TCP
TargetPort: 8080/TCP
NodePort: web 31003/TCP
Endpoints: 172.17.0.6:8080
Port: debug 5005/TCP
TargetPort: 5005/TCP
NodePort: debug 32003/TCP
Endpoints: 172.17.0.6:5005
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
This is the pods service.yml:
apiVersion: v1
kind: Service
metadata:
name: catalogservice
spec:
type: NodePort
selector:
app: catalogservice
ports:
- name: web
protocol: TCP
port: 31003
nodePort: 31003
targetPort: 8080
- name: debug
protocol: TCP
port: 5005
nodePort: 32003
targetPort: 5005
And here I expose the container's ports:
spec:
containers:
- name: catalogservice
image: elps/myimage
ports:
- containerPort: 8080
name: app
- containerPort: 5005
name: debug
The way I build the image:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8082
ADD /target/catalogservice-0.0.1-SNAPSHOT.jar catalogservice-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n", "-jar", "catalogservice-0.0.1-SNAPSHOT.jar"]
When I execute nmap -p 5005 192.168.99.100 I receive
PORT STATE SERVICE
5005/tcp closed avt-profile-2
When I execute nmap -p 32003 192.168.99.100 I receive
PORT STATE SERVICE
32003/tcp closed unknown
When I execute nmap -p 31003 192.168.99.100 I receive
PORT STATE SERVICE
31003/tcp open unknown
When I execute kubectl get services I receive
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalogservice NodePort 10.108.195.102 <none> 31003:31003/TCP,5005:32003/TCP 14m
minikube service customerservice --url returns
http://192.168.99.100:32004
As an alternative to using a NodePort in a Service you could also use kubectl port-forward to access the debug port in your Pod.
Since Kubernetes v1.10, kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to port forward to.
You need to expose the debug port in the Deployment YAML for the Pod:
spec:
containers:
...
ports:
...
- containerPort: 5005
Then get the name of your Pod via
kubectl get pods
and then add a port-forwarding to that Pod
kubectl port-forward podname 5005:5005
In IntelliJ you will be able to connect to
Host: localhost
Port: 5005
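Since resource names are supported (as noted above), you can also skip looking up the pod name and forward via the Deployment or Service directly; a sketch assuming the Deployment is also named catalogservice:
kubectl port-forward deployment/catalogservice 5005:5005
# or, via the Service's debug port
kubectl port-forward service/catalogservice 5005:5005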
Alternatively, you can use the Cloud Code Intellij plugin.
Also, if you use Fabric8, it provides the fabric8:debug goal.
There was a slip in the YAML you first posted:
- containerPort: 5050
name: debug
Should be:
- containerPort: 5005
name: debug
You also need to use the external port of 32003 when configuring the IntelliJ debugger. With those changes it should work.
You may also want to think about how to make it more flexible. In the past, when I've done this, I've used a different form of the Docker start command that lets you turn remote debugging on and off with an environment variable called REMOTE_DEBUG, which for you would be:
CMD if [ "x$REMOTE_DEBUG" = "xfalse" ] ; then java $JAVA_OPTS -jar catalogservice-0.0.1-SNAPSHOT.jar ; else java $JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n -jar catalogservice-0.0.1-SNAPSHOT.jar ; fi
You'll probably find you want to set the env var $JAVA_OPTS to limit JVM memory use to avoid issues in k8s.
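A sketch of how those two variables could be set on the container in the Deployment (the values here are only examples):
spec:
  containers:
    - name: catalogservice
      image: elps/myimage
      env:
        - name: REMOTE_DEBUG    # "false" disables the -agentlib flags in the CMD above
          value: "true"
        - name: JAVA_OPTS       # example heap cap to keep the JVM within the pod's memory limits
          value: "-Xmx256m"
      ports:
        - containerPort: 8080
          name: app
        - containerPort: 5005
          name: debug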