Kubernetes - minikube service connection timeout - docker

I have a Docker environment with 3 containers: nginx, PHP with Laravel and a MySQL database. It works fine, and I'm now trying to learn Kubernetes.
I was hoping to create a deployment and a service just for the nginx container to make it simple to start with:
Here is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolkit-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      container: toolkit-server
  template:
    metadata:
      labels:
        container: toolkit-server
    spec:
      containers:
        - name: toolkit-server
          image: my/toolkit-server:test
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: my-cred
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    container: toolkit-server
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
  type: LoadBalancer
And just in case it's needed, here is the nginx part of the docker-compose.yaml:
version: "3.8"
services:
  server:
    build:
      context: .
      dockerfile: dockerfiles/nginx.dockerfile
    ports:
      - "8000:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    container_name: toolkit-server
The deployment is created successfully and I can see that 1/1 pods are running.
However, when I run minikube service backend, the URL I get just times out.
I was expecting to see some sort of nginx page, maybe an nginx error page, but with a timeout I'm not sure what the next step is.
I'm brand new to Kubernetes, so there's a good chance I've messed up the ports or something basic. Any help is appreciated.
Edit:
As advised by @david-maze I changed the following in the deployment.yaml:
  ports:
    - containerPort: 80
And the following in service.yaml:
  targetPort: 80
This gave me an nginx error page when viewed in the browser, as expected, but crucially it no longer timed out.

This is a community wiki answer posted for better visibility. Feel free to expand it.
As discussed in the comments, the issue was due to a wrong port configuration.
targetPort is the port to which the service will send requests, and on which your pod will be listening. The application in the container needs to listen on this port as well.
containerPort defines the port on which the app can be reached inside the container.
So in your use case the deployment should have:
  ports:
    - containerPort: 80
and the service:
  targetPort: 80
With this change the connection will no longer time out.
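To verify the fix, it can help to check which endpoints the service actually picked up. A quick sketch, assuming the resource names above:
# List the service and the endpoints its selector matched
kubectl get service backend
kubectl get endpoints backend
# After the fix the endpoint should read <pod-ip>:80; if it still shows
# port 8000, or no addresses at all, the targetPort or the label selector
# does not match the pod
minikube service backend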

Related

Multi Container ASP.NET Core app in a Kubernetes Pod gives error address already in use

I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. Unfortunately, one container starts and the other gives the error address already in use.
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: multiapp
          imagePullPolicy: Never
          ports:
            - containerPort: 80
        - name: cmultiapi
          image: multiapi
          imagePullPolicy: Never
          ports:
            - containerPort: 81
The full log of the failing container is:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried assigning a different port to that container in the YAML file:
  ports:
    - containerPort: 81
But it doesn't seem to work. How do I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort as part of the pod definition is for informational purposes only.
This means that setting containerPort does not have any influence on what port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell that to the application. It's usually done with flags, environment variables or config files. Setting a port in the pod/container YAML definition won't change a thing.
You have to remember that the k8s network model is different from Docker's and Docker Compose's model.
So why does the containerPort field exist if it doesn't do anything, you may ask?
Well, actually that's not completely true. Its main purpose is indeed informational/documentational, but it may also be used with services. You can name a port in the pod definition and then use this name to reference the port in the service definition YAML (this only applies to the targetPort field).
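As a minimal sketch of that named-port mechanism (the names example and http here are illustrative, not from the question):
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: http          # name the container port...
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - port: 8080
      targetPort: http        # ...and reference it by name here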
Check whether your images expose the same port, or whether they try to use the same port (see the images' Dockerfiles).
I suppose both of your images may be trying to start something on the same port, so the first container gets created perfectly, but the second container tries to use the same port and gets the bind: address already in use error.
You can check the pod's logs for each of your containers (with kubectl logs <pod_name> <container_name>); then it will be clear.
I tried applying your YAML with one of my Docker images (which starts a server on port 8080), and after applying the YAML below I got the same error you got.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: cmultiapi
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
I checked the first container's log, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp and the result is:
int port : :8080
start called
Then I checked the second container's log, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason for the error
So, I suppose your images also do something like that.
What works
Both of the YAMLs below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: cmultiapi
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 80
        - name: cmultiapi
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
If you have a docker-compose YAML, you can use the Kompose tool to convert it into Kubernetes objects.
The documentation link is below:
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
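As a quick sketch, assuming kompose is installed and a docker-compose.yml sits in the current directory:
# Convert the compose file into Kubernetes manifests
kompose convert -f docker-compose.yml
# One *-deployment.yaml (and *-service.yaml, where ports are published)
# is written per compose service; apply them with:
kubectl apply -f .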
Please use kubectl explain to understand every field of your deployment YAML.
As can be seen in the explanation for ports below, the ports list in a deployment YAML is primarily informational.
Since both containers in the pod share the same network namespace, the processes running inside the containers cannot use the same ports.
kubectl explain deployment.spec.template.spec.containers.ports

KIND:     Deployment
VERSION:  apps/v1

RESOURCE: ports <[]Object>

DESCRIPTION:
     List of ports to expose from the container. Exposing a port here gives the
     system additional information about the network connections a container
     uses, but is primarily informational. Not specifying a port here DOES NOT
     prevent that port from being exposed. Any port which is listening on the
     default "0.0.0.0" address inside a container will be accessible from the
     network. Cannot be updated.

     ContainerPort represents a network port in a single container.

FIELDS:
   containerPort        <integer> -required-
     Number of port to expose on the pod's IP address. This must be a valid port
     number, 0 < x < 65536.

   hostIP       <string>
     What host IP to bind the external port to.

   hostPort     <integer>
     Number of port to expose on the host. If specified, this must be a valid
     port number, 0 < x < 65536. If HostNetwork is specified, this must match
     ContainerPort. Most containers do not need this.

   name <string>
     If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
     named port in a pod must have a unique name. Name for the port that can be
     referred to by services.

   protocol     <string>
     Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Please provide the Dockerfiles for both images, and the docker-compose files, docker run commands, or docker service create commands for the existing multi-container Docker application, for further help.
I solved this by using environment variables and assigning the ASP.NET Core URL to port 81.
- name: cmultiapi
  image: multiapi
  imagePullPolicy: Never
  ports:
    - containerPort: 81
  env:
    - name: ASPNETCORE_URLS
      value: http://+:81
I would also like to mention the URL where I got the necessary help. The link is here.

Presto cluster on Kubernetes error worker bind port 8080

I'm setting up a Presto cluster with 1 coordinator and 1 worker. I used the same images with plain Docker and it worked.
However, when I move to Kubernetes I get an error in the worker node when initialising Presto:
ERROR main com.facebook.presto.server.PrestoServer
Unable to create injector, see the following errors:
1) Error in custom provider, java.lang.NullPointerException
  at com.facebook.airlift.discovery.client.DiscoveryBinder.bindServiceAnnouncement(DiscoveryBinder.java:79)
  while locating com.facebook.airlift.discovery.client.ServiceAnnouncement annotated with @com.google.inject.internal.Element(setName=,uniqueId=146, type=MULTIBINDER, keyType=)
  while locating java.util.Set
3) Error injecting constructor, java.io.UncheckedIOException: Failed to bind to /0.0.0.0:8080
  at com.facebook.airlift.http.server.HttpServerInfo.<init>(HttpServerInfo.java:48)
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-c
  template:
    metadata:
      labels:
        app: presto-c
    spec:
      containers:
        - name: presto-co
          image: x/openjdk-presto-k:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: presto-wo
          image: x/openjdk-prestoworker-k:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8181
service
apiVersion: v1
kind: Service
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  selector:
    app: presto-c
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort
When I bring up the namespace, service and deployment, only the coordinator becomes operative.
It seems to be related to the worker not being able to bind to port 8080 for the discovery of the coordinator. I know that inside a pod all containers share the same network namespace, and therefore ports, and that could be the issue here, but I don't know the technologies well enough to check it and potentially change the port in the worker.
Do you have any idea of the issue?
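Not an authoritative fix, but one way to sidestep the shared network namespace is to run the coordinator and the worker as separate Deployments, so each gets its own pod IP and both can bind 8080. A sketch reusing the names from the question (how the worker finds the coordinator, e.g. a discovery.uri in its config.properties pointing at the service name, is an assumption about how these images are configured):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-coordinator
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-co
  template:
    metadata:
      labels:
        app: presto-co
    spec:
      containers:
        - name: presto-co
          image: x/openjdk-presto-k:1.0
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-worker
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-wo
  template:
    metadata:
      labels:
        app: presto-wo
    spec:
      containers:
        - name: presto-wo
          image: x/openjdk-prestoworker-k:1.0
          ports:
            # The worker can now also bind 8080 without colliding
            # with the coordinator
            - containerPort: 8080
The existing service would then need its selector narrowed to the coordinator's label (app: presto-co) so client traffic only reaches the coordinator.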

How do I set these docker-compose ports in a kubernetes yaml file?

Given the following ports defined in a docker-compose.yml file, how do I do the equivalent in a kubernetes yml file?
docker-compose.yml
seq.logging:
  image: datalust/seq
  networks:
    - backend
  container_name: seq.logging
  environment:
    - ACCEPT_EULA=Y
  ports:
    - "5300:80" # UI
    - "5301:5341" # Data ingest
kubernetes.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      # ports: ?????????????????????????????????????
      # - containerPort: "5300:80" # UI
      # - containerPort: "5301:5341" # Data ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
You do not expose a port using a Pod/Deployment YAML.
Services are the way to do it. You can either use multiple services on top of your pod/deployment, but this will result in multiple IP addresses, or you can name each port and then create a multi-port service definition.
In your case it should look somewhat like this (note this is just a quickly written example). Also note that
when using multiple ports you must give all of your ports names, so
that endpoints can be disambiguated.
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      ports:
        - containerPort: 80 # UI
          name: ui
        - containerPort: 5341 # Data ingest
          name: data-ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
---
apiVersion: v1
kind: Service
metadata:
  name: seq-logging-service
spec:
  type: # service type
  selector:
    app: infrastructure
  ports:
    - name: ui
      port: 5300
      targetPort: 80
    - name: data-ingest
      port: 5301
      targetPort: 5341
Some more resources:
- Docs about connecting applications with services.
- An example YAML from the above, featuring a deployment with a multi-port container and a corresponding service.
Update:
containerPort
List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.

How to configure multiple services/containers in Kubernetes?

I am new to Docker and Kubernetes.
Technologies used:
.NET Core 2.2
ASP.NET Core WebAPI 2.2
Docker for Windows (Edge) with Kubernetes support enabled
Code
I have two services hosted in two Docker containers, container1 and container2.
Below is my deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
        - name: webapi-dockerkube
          image: "webapidocker:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /api/values
              port: 80
          readinessProbe:
            httpGet:
              path: /api/values
              port: 80
        - name: webapi-dockerkube2
          image: "webapidocker2:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /api/other/values
              port: 80
          readinessProbe:
            httpGet:
              path: /api/other/values
              port: 80
When I run the command:
kubectl create -f .\deploy.yaml
I get the status CrashLoopBackOff.
But the same runs fine when I have only one container configured.
When checking the logs I get the following error:
Error from server (BadRequest): a container name must be specified for pod webapi-dockerkube-8658586998-9f8mk, choose one of: [webapi-dockerkube webapi-dockerkube2]
You are running two containers in the same pod which both bind to port 80. This is not possible within the same pod.
Think of a pod like a 'server': you can't have two processes bind to the same port.
Solution in your situation: use different ports inside the pod, or use separate pods. From your deployment there seem to be no shared resources like a filesystem, so it would be easy to split the containers into separate pods.
Note that it will not suffice to change the pod definition if you want to have both containers running in the same pod with different ports. The application in the container must bind to a different port as well, as shown in the sketch below.
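As a sketch of the different-ports variant, assuming both images are ASP.NET Core apps that honour ASPNETCORE_URLS (as elsewhere in this thread):
containers:
  - name: webapi-dockerkube
    image: "webapidocker:latest"
    ports:
      - containerPort: 80
  - name: webapi-dockerkube2
    image: "webapidocker2:latest"
    ports:
      - containerPort: 8080
    env:
      # Tell Kestrel to bind to 8080 instead of the default 80; the
      # liveness/readiness probes must then point at 8080 as well
      - name: ASPNETCORE_URLS
        value: http://+:8080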
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian-container
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
The above is an example of a multi-container pod that you can use as a template.
You can also check the logs of each container with kubectl logs to find the reason for the CrashLoopBackOff.
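A quick sketch of those diagnostics, using the pod and container names from the question:
# Pod status and recent events, including the CrashLoopBackOff reason
kubectl describe pod webapi-dockerkube-8658586998-9f8mk
# Logs of each container in the multi-container pod
kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube
kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube2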

How does Kubernetes invoke a Docker image?

I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to be working fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the deployment goes into CrashLoopBackOff because uWSGI complains:
uwsgi: unrecognized option '--http 127.0.0.1:8080'.
The image definitely has the http option because:
a. uWSGI was installed via pip3 which includes the http plugin.
b. When I run the deployment with --list-plugins, the http plugin is listed.
c. The http option is recognized correctly when run locally.
I am running the Docker image locally with:
$: docker run <image_name> uwsgi --http 127.0.0.1:8080
The container Kubernetes YAML config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: launch-service-example
  name: launch-service-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: launch-service-example
    spec:
      containers:
        - name: launch-service-example
          image: <image_name>
          command: ["uwsgi"]
          args:
            - "--http 127.0.0.1:8080"
            - "--module code.experimental.launch_service_example.__main__"
            - "--callable APP"
            - "--master"
            - "--processes=2"
            - "--enable-threads"
            - "--pyargv --test1=3--test2=abc--test3=true"
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: launch-service-example-service
spec:
  selector:
    app: launch-service-example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
The container is exactly the same, which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing everything via the command list with no args, which leads to the same result. Any help would be greatly appreciated.
It is happening because of the difference between how arguments are processed in the shell and in the container configuration: each element of the args list is passed to the process as a single argument, so "--http 127.0.0.1:8080" arrives as one argument, space included, rather than as an option and its value.
To fix it, split your args like this:
args:
  - "--http"
  - "127.0.0.1:8080"
  - "--module"
  - "code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"
