Establishing a PPTP connection in a Kubernetes Pod - docker

I'm trying to set up a pod running a pptp-client.
I want to access a single machine behind the VPN, and this works fine locally: my Docker container adds a record to my localhost's routing table, and all is well.
ip route add x.x.x.x dev ppp0
I am only able to establish a connection to the VPN server as long as privileged is set to true and network_mode is set to "host".
The production environment is a bit different: the "localhost" would be one of the three nodes in our Google Container cluster.
I don't know whether the route added after the connection is established would only be usable by the containers running on that node, but that is a later problem.
docker-compose.yml
version: '2'
services:
  pptp-tunnel:
    build: ./
    image: eu.gcr.io/project/image
    environment:
      - VPN_SERVER=X.X.X.X
      - VPN_USER=XXXX
      - VPN_PASSWORD=XXXX
    privileged: true
    network_mode: "host"
This seems to be more difficult to achieve with Kubernetes, even though both options (hostNetwork, privileged) exist and are declared in my manifest, as you can see below.
Kubernetes Version
Version 1.6.6
pptp-tunnel.yml
apiVersion: v1
kind: Service
metadata:
  name: pptp-tunnel
  namespace: default
  labels:
spec:
  type: ClusterIP
  selector:
    app: pptp-tunnel
  ports:
  - name: pptp
    port: 1723
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pptp-tunnel
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: pptp-tunnel
  template:
    metadata:
      labels:
        app: pptp-tunnel
    spec:
      hostNetwork: true
      containers:
      - name: pptp-tunnel
        env:
        - name: VPN_SERVER
          value: X.X.X.X
        - name: VPN_USER
          value: XXXX
        - name: VPN_PASSWORD
          value: 'XXXXX'
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN"]
        image: eu.gcr.io/project/image
        imagePullPolicy: Always
        ports:
        - containerPort: 1723
I've also tried adding the NET_ADMIN capability, as you can see, without effect. Running the container in privileged mode should disable the security restrictions anyway; I shouldn't need both.
It would be nice not to have to run the container in privileged mode and instead rely on capabilities alone to bring the ppp0 interface up and add the route.
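For reference, the capabilities-only variant I would prefer would look roughly like this (just a sketch; whether pppd can then open /dev/ppp without privileged also depends on the runtime exposing that device, which I haven't verified):
securityContext:
  privileged: false
  capabilities:
    add: ["NET_ADMIN"]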
What happens when the Pod starts is that the pptp client simply keeps sending requests and times out.
(This happens with my Docker container locally as well, until I turn network_mode "host" on.)
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xa43cd4b4> <pcomp> <accomp>]
LCP: timeout sending Config-Requests
But this is without hostNetwork enabled; if I enable it, a single request is sent and then the modem hangs up.
using channel 42
Using interface ppp0
Connect: ppp0 <--> /dev/pts/0
sent [LCP ConfReq id=0x7 <asyncmap 0x0> <magic 0xcdae15b8> <pcomp> <accomp>]
Script ?? finished (pid 59), status = 0x0
Script pptp XX.XX.XX.XX --nolaunchpppd finished (pid 60), status = 0x0
Script ?? finished (pid 67), status = 0x0
Modem hangup
Connection terminated.
Declaring the hostNetwork boolean lets me see the interfaces shared from the host, so that part is working, but somehow I am not able to establish a connection, and I can't figure out why.
Perhaps there is a better approach? I will still need to establish a connection to the VPN server, but adding a routing record to the host may not be the best solution.
Any help is greatly appreciated!

Related

Multi Container ASP.NET Core app in a Kubernetes Pod gives error address already in use

I have an ASP.NET Core multi-container Docker app which I am now trying to host in a Kubernetes cluster on my local PC. Unfortunately, one container starts and the other gives the error "address already in use".
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: multiapp
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: multiapi
        imagePullPolicy: Never
        ports:
        - containerPort: 81
The full log of the failing container is:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried giving that container another port in the YAML file:
ports:
- containerPort: 81
But it doesn't seem to work. How do I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort, as part of the pod definition, is for informational purposes only.
This means that setting containerPort has no influence on which port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell the application itself. That's usually done with flags, env vars, or config files. Setting a port in the pod/container YAML definition won't change a thing.
You have to remember that the k8s network model is different from Docker's and Docker Compose's model.
So why does the containerPort field exist if it doesn't do anything? - you may ask.
Well, actually that's not completely true. Its main purpose is indeed informational/documentation, but it may also be used with services. You can name a port in the pod definition and then use this name to reference the port in the service definition YAML (this only applies to the targetPort field).
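To illustrate that named-port pattern, a minimal sketch (the name and port numbers here are just examples, not taken from the question):
# in the Pod template
ports:
- name: web
  containerPort: 8080
---
# in the Service
ports:
- port: 80
  targetPort: web   # refers to the containerPort named "web" above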
Check whether your images expose the same port, or whether they try to use the same port (see the images' Dockerfiles).
I suppose both of your images may be trying to start something on the same port, so the first container starts fine, but the second one then tries to use the same port and gets the bind: address already in use error.
You can check the pod logs for each of your containers (with kubectl logs <pod_name> <container_name>) and then it will be clear.
I tried applying your YAML with one of my Docker images (which starts a server on port 8080); after applying the below YAML I got the same error you got.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
I checked the log of the first container, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp and the result is:
int port : :8080
start called
Then I checked the log of the second container, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason of the error
So, I suppose your images also do something like that.
What works
The YAMLs below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
If you have a Docker Compose YAML, please use the Kompose tool to convert it into Kubernetes objects.
Below is the documentation link
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
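For example, roughly (assuming kompose is installed and the compose file sits in the current directory):
kompose convert -f docker-compose.yml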
Please use kubectl explain to understand every field of your deployment YAML.
As can be seen in the explanation for ports below, the ports list in the deployment YAML is primarily informational.
Since both the containers in the Pod share the same Network Namespace, the processes running inside the containers cannot use the same ports.
kubectl explain deployment.spec.template.spec.containers.ports
KIND: Deployment
VERSION: apps/v1
RESOURCE: ports <[]Object>
DESCRIPTION:
List of ports to expose from the container. Exposing a port here gives the
system additional information about the network connections a container
uses, but is primarily informational. Not specifying a port here DOES NOT
prevent that port from being exposed. Any port which is listening on the
default "0.0.0.0" address inside a container will be accessible from the
network. Cannot be updated.
ContainerPort represents a network port in a single container.
FIELDS:
containerPort <integer> -required-
Number of port to expose on the pod's IP address. This must be a valid port
number, 0 < x < 65536.
hostIP <string>
What host IP to bind the external port to.
hostPort <integer>
Number of port to expose on the host. If specified, this must be a valid
port number, 0 < x < 65536. If HostNetwork is specified, this must match
ContainerPort. Most containers do not need this.
name <string>
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
named port in a pod must have a unique name. Name for the port that can be
referred to by services.
protocol <string>
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Please provide the Dockerfiles for both images, plus the Docker Compose files, docker run commands, or docker service create commands for the existing multi-container Docker application, for further help.
I solved this by using environment variables and assigning the ASP.NET URL to port 81.
- name: cmultiapi
  image: multiapi
  imagePullPolicy: Never
  ports:
  - containerPort: 81
  env:
  - name: ASPNETCORE_URLS
    value: http://+:81
I would also like to mention the url where I got the necessary help. Link is here.

Securing End User-Defined Kubernetes Pods

I am developing a game development platform that allows users to run their game servers within my Kubernetes cluster. What is everything that I need to restrict / configure to prevent malicious users from gaining access to resources they should not be allowed to access such as internal pods, Kubernetes access keys, image pull secrets, etc?
I'm currently looking at Network Policies to restrict access to internal IP addresses, but I'm not sure whether users would still be able to enumerate the DNS names of sensitive internal architecture. Would they still be able to somehow find out how my MongoDB, Redis, and Kafka pods are configured?
Also, I'm aware Kubernetes puts an API token at the /var/run/secrets/kubernetes.io/serviceaccount/token path. How do I disable this token from being created? Are there other sensitive files I need to remove / disable?
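(One field I've come across that looks relevant to the token question is automountServiceAccountToken; a minimal sketch of how it would sit in the Pod spec, assuming the game servers never need to talk to the Kubernetes API:)
spec:
  automountServiceAccountToken: false
  containers:
  - name: game-server
    image: game-server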
I've been researching everything I can think of, but I want to make sure that I'm not missing anything.
Pods are defined within a Deployment with a Service, and exposed via Nginx Ingress TCP / UDP ConfigMap. Example Configuration:
---
metadata:
  labels:
    app: game-server
  name: game-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-server
  template:
    metadata:
      labels:
        app: game-server
    spec:
      containers:
      - image: game-server
        name: game-server
        ports:
        - containerPort: 7777
        resources:
          requests:
            cpu: 500m
            memory: 500M
      imagePullSecrets:
      - name: docker-registry-image-pull-secret
---
metadata:
  labels:
    app: game-server
    service: game-server
  name: game-server
spec:
  ports:
  - name: tcp
    port: 7777
  selector:
    app: game-server
TL;DR: How do I run insecure, end user-defined Pods within my Kubernetes cluster safely?
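For reference, the kind of NetworkPolicy I'm experimenting with is a namespace-wide default deny on egress (the game-servers namespace name is just a placeholder):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: game-servers
spec:
  podSelector: {}
  policyTypes:
  - Egress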

Eventstore doesn't work in Kubernetes (but works in Docker)

I want to run Eventstore on a Kubernetes node. I start the node with minikube start, then I apply this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventstore-deployment
spec:
  selector:
    matchLabels:
      app: eventstore
  replicas: 1
  template:
    metadata:
      labels:
        app: eventstore
    spec:
      containers:
      - name: eventstore
        image: eventstore/eventstore
        ports:
        - containerPort: 1113
          protocol: TCP
        - containerPort: 2113
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: eventstore
spec:
  selector:
    app: eventstore
  ports:
  - protocol: TCP
    port: 1113
    targetPort: 1113
---
apiVersion: v1
kind: Service
metadata:
  name: eventstore-dashboard
spec:
  selector:
    app: eventstore
  ports:
  - protocol: TCP
    port: 2113
    targetPort: 2113
    nodePort: 30113
  type: NodePort
The deployment, the replica set, and the pod start, but nothing happens: Eventstore doesn't print anything to the log and I can't open its dashboard. Other services can't connect to eventstore:1113 either. There are no errors and the pod doesn't crash.
The only thing I see in the logs is "The selected container has not logged any messages yet".
I've tried a clean vanilla minikube node with different VM drivers, and also a node with Ambassador + Linkerd configured. The results are the same.
But when I run Eventstore in Docker via docker-compose with this YAML file,
eventstore:
  image: eventstore/eventstore
  ports:
    - '1113:1113'
    - '2113:2113'
everything works fine: Eventstore writes to its logs, other services can connect to it, and I can open its dashboard on port 2113.
UPDATE: Eventstore started working about 30-40 minutes after deployment. I've tried several times and had to wait each time. Other pods start working almost immediately (30 secs - 1 min) after deployment.
As #ligowsky confirmed in the comment section, the issue was caused by VM performance. Posting this as Community Wiki for better visibility.
By default, Minikube runs with 2 CPUs and 2048 MB of memory. More details can be found here.
You can change this if your VM has more resources.
- During Minikube start
$ sudo minikube start --cpus 2 --memory 8192 --vm-driver=<driverType>
- While Minikube is running (however, Minikube then needs to be restarted):
$ minikube config set memory 4096
⚠️ These changes will take effect upon a minikube delete and then a minikube start
More commands can be found in Minikube docs.
In my case, when the Minikube resources were 4 CPUs and 8192 MB of memory, I didn't have any issues with Eventstore.
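If in doubt, a quick way to see what the node actually has available is to look at its capacity (on minikube the single node is simply called minikube):
kubectl describe node minikube | grep -A 5 Capacity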
OP's Solution
OP used Kind to run the Eventstore deployment.
Kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind is primarily designed for testing Kubernetes 1.11+.
Kind documentation can be found here.

Kubernetes and Docker: how to let two services communicate correctly

I have two Java microservices (caller.jar, which calls called.jar).
We can set the caller service's HTTP port through the env var CALLERPORT, and the address of the called service through the env var CALLEDADDRESS.
So caller uses two env vars.
We must also set the called service's env var CALLEDPORT in order to set the specific HTTP port on which the called service listens for HTTP requests.
I don't know exactly how to expose these variables from a Dockerfile so that they can be set using Kubernetes.
Here is how I made the two Dockerfiles:
Dockerfile of caller
FROM openjdk:8-jdk-alpine
# ENV CALLERPORT (its own port)
# ENV CALLEDADDRESS (the other service's address)
ADD caller.jar /
CMD ["java", "-jar", "caller.jar"]
Dockerfile of called
FROM openjdk:8-jdk-alpine
# ENV CALLEDPORT (its own port)
ADD called.jar /
CMD ["java", "-jar", "called.jar"]
With these I've made two Docker images:
myaccount/caller
myaccount/called
Then I've made the two deployment YAMLs in order to let K8s deploy the two microservices (on minikube) using replicas and load balancers.
deployment-caller.yaml
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: caller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: caller
  labels:
    app: caller
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: caller
      tier: caller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: caller
        tier: caller
    spec:
      containers:
      - image: myaccount/caller
        name: caller
        env:
        - name: CALLERPORT
          value: "8080"
        - name: CALLEDADDRESS
          value: called-loadbalancer # WHAT TO PUT HERE?!
        ports:
        - containerPort: 8080
          name: caller
And deployment-called.yaml
apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: called
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: called
  labels:
    app: called
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: called
      tier: called
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: called
        tier: called
    spec:
      containers:
      - image: myaccount/called
        name: called
        env:
        - name: CALLEDPORT
          value: "8081"
        ports:
        - containerPort: 8081
          name: called
IMPORTANT:
Each service works well when called individually (such as calling a healthcheck endpoint), but when calling the endpoint that involves communication between the two services, there is this error:
java.net.UnknownHostException: called
The pods are running correctly and are active, but I guess the problem is the part of the deployment.yaml in which I must define how to find the target service, so here:
spec:
  containers:
  - image: myaccount/caller
    name: caller
    env:
    - name: CALLERPORT
      value: "8080"
    - name: CALLEDADDRESS
      value: called-loadbalancer # WHAT TO PUT HERE?!
    ports:
    - containerPort: 8080
      name: caller
Neither
called
nor
called-loadbalancer
nor
http://caller
kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/called-855cc4d89b-4gf97 1/1 Running 0 3m23s 172.17.0.4 minikube <none> <none>
pod/called-855cc4d89b-6268l 1/1 Running 0 3m23s 172.17.0.5 minikube <none> <none>
pod/caller-696956867b-9n7zc 1/1 Running 0 106s 172.17.0.6 minikube <none> <none>
pod/caller-696956867b-djwsn 1/1 Running 0 106s 172.17.0.7 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/called-loadbalancer LoadBalancer 10.99.14.91 <pending> 8081:30161/TCP 171m app=called
service/caller-loadbalancer LoadBalancer 10.107.9.108 <pending> 8080:30078/TCP 65m app=caller
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 177m <none>
works if put in that line of the deployment.yaml.
So what should I put in this line?
The short answer is you don't need to expose them in the Dockerfile. You can set any environment variables you want when you start a container and they don't have to be specified upfront in the Dockerfile.
You can verify this by starting a container using 'docker run' with '-e' to set env vars and '-it' to get an interactive session. Then echo the value of your env var and you'll see it is set.
You can also get a terminal session in one of the containers of your running Kubernetes Pod with 'kubectl exec' (https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/). From there you can echo environment variables to see that they are set. You can see them more quickly with 'kubectl describe pod ' after getting the pod name with 'kubectl get pods'.
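A rough sketch of both checks (the pod name and the env var values are placeholders, not taken from your cluster):
# set the env vars ad hoc when running the image directly
docker run -it -e CALLERPORT=8080 -e CALLEDADDRESS=http://example:8081 myaccount/caller sh
# then inside the shell: echo $CALLEDADDRESS

# inspect what a running pod actually sees
kubectl get pods
kubectl exec -it <caller-pod-name> -- env | grep CALLED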
Since you are having problems, you'll also want to check whether your services are working correctly. As you are using minikube, you can do 'minikube service ' to check that they can be accessed externally. You'll also want to check internal access - see Accessing spring boot controller endpoint in kubernetes pod.
Your approach of using service names and ports is valid, and with a bit of debugging you should be able to get it working. Your setup is similar to an illustration I did in https://dzone.com/articles/kubernetes-namespaces-explained so referring to that might help (except you are using env vars directly instead of through a ConfigMap, but it amounts to the same thing).
I think that in the caller you are injecting the wrong port in the env var - you are putting the caller's own port and not the port of what it is trying to call.
To access services inside Kubernetes you should use these DNS names:
http://caller-loadbalancer.default.svc.cluster.local:8080
http://called-loadbalancer.default.svc.cluster.local:8081
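Plugged into the caller Deployment above, that would look roughly like this (whether the application expects the scheme and port inside CALLEDADDRESS, or reads a separate port variable, is an assumption about the Java code):
env:
- name: CALLERPORT
  value: "8080"
- name: CALLEDADDRESS
  value: http://called-loadbalancer.default.svc.cluster.local:8081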
First of all - it's very hard to understand what you want. Your post starts with:
We can set...
We must set...
Nobody here knows what you want to do, and it would be much more useful to see some definition of done that you are expecting.
That having been said, I'll turn to your substantive question...
env:
- name: CALLERPORT
  value: "8080"
- name: CALLEDADDRESS
  value: called-loadbalancer # WHAT TO PUT HERE?!
ports:
- containerPort: 8080
  name: caller
These things are exported by k8s automatically. For example, I have a service kibana with port: 80 in the service definition:
svc/kibana ClusterIP 10.222.81.249 <none> 80/TCP 1y app=kibana
and this is how I can get it from a different pod in the same namespace:
root@some-pod:/app# env | grep -i kibana
KIBANA_SERVICE_PORT=80
KIBANA_SERVICE_HOST=10.222.81.249
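For the services in this question, the caller pod would see variables named after the service (upper-cased, with dashes turned into underscores), roughly along these lines; note they are only injected for services that already exist when the pod starts:
CALLED_LOADBALANCER_SERVICE_HOST=10.99.14.91
CALLED_LOADBALANCER_SERVICE_PORT=8081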
Moving on, why do you use LoadBalancer? Without a cloud provider it will behave similarly to NodePort, and it seems like ClusterIP is all you need.
Next, the service ports can be the same and there won't be any port collisions, because each service gets its own unique ClusterIP and therefore a unique socket. Your services could be described like this:
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80 <--------------------
    targetPort: 8080
  selector:
    app: caller
apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80 <------------------
    targetPort: 8081
  selector:
    app: called
That would let you use the service names alone, without specifying ports:
http://caller-loadbalancer.default.svc.cluster.local
http://called-loadbalancer.default.svc.cluster.local
or
http://caller-loadbalancer.default
http://called-loadbalancer.default
or (within the similar namespace):
http://caller-loadbalancer
http://called-loadbalancer
or (depending on the lib)
caller-loadbalancer
called-loadbalancer
The same goes for containerPort/targetPort! Why do you use 8081 and 8080? Who cares about the internal container ports? I agree different cases happen, but here you have a single process inside and you are definitely not going to run more processes there, are you? So those could also be the same.
I'd also advise you to use Stack Overflow in a different way: don't ask how to do something your way; it's much better to ask how to do something the best way.

Kubernetes + Metallb: Nginx pod not receiving traffic with Local traffic Policy, Layer 2 mode

What happened:
I changed my nginx service's externalTrafficPolicy to Local and now my nginx pod no longer receives traffic.
What you expected to happen:
The nginx pod would continue to get traffic, but with the source IP intact, using Layer 2 mode.
Environment:
MetalLB version: 0.7.1
Kubernetes version: latest
OS (e.g. from /etc/os-release): centos7
Kernel (e.g. uname -a): Linux K8SM1 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
I have an nginx pod that listens for UDP on port 80 and redirects the UDP packets to 192.168.122.206:8080.
I have a simple UDP server that listens on 192.168.122.206:8080. This was working fine, but I needed to know the original source IP and port of the packets, so I changed my service's traffic policy to Local.
Now, the pod doesn't seem to get traffic.
I am running a single node bare metal cluster.
I have tried doing "kubectl logs pod-name" but nothing shows up, leading me to believe the pod isn't getting traffic at all.
I am making sure that my UDP packet is being sent to the external ip of the nginx service and port 80.
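(For reference, a rough sketch of the kind of test packet I'm sending, with the external IP as a placeholder for whatever MetalLB assigned:)
echo "test" | nc -u -w1 <EXTERNAL-IP> 80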
My nginx.conf, from which I built the image:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 80 udp;
        proxy_pass 192.168.122.206:8080;
    }
}
My nginx deployment and service
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: asvilla/custom_nginx2:first
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: UDP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
I have set the verbosity of my pod and container logs to 9. They show nothing new when I send the packet.
I also set verbosity to 9 for "kubectl describe service nginx", and that doesn't show anything new when I send the packet either.
My best guess is that something is going wrong with kube-proxy. The fact that my master is my only node might also be a factor, although when I set it up I untainted it and allowed the scheduler to treat it as a worker node.
Since you have already configured the Service to route the network traffic via the UDP protocol, I guess this should also be declared on the Nginx Deployment by adding the protocol: UDP parameter:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: asvilla/custom_nginx2:first
        ports:
        - name: http
          containerPort: 80
          protocol: UDP
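After applying that change, it may also be worth confirming that the Service still selects the pod and has endpoints, for example with:
kubectl get endpoints nginx
kubectl describe service nginx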
