On a single Ubuntu 14.04 box
I've followed the same configuration as
http://dojoblog.dellemc.com/dojo/deploy-kafka-cluster-kubernetes/
I use Kubernetes version v1.10.2
( I also use apiVersion: apps/v1 in yml files. )
Basically I have set up a Kubernetes service for Kafka and a Kafka deployment using the image wurstmeister/kafka. Zookeeper is working fine; both the Zookeeper and Kafka services are up.
The Kafka deployment is configured as per the blog: KAFKA_ADVERTISED_HOST_NAME = the Kafka service cluster IP, which for me is 10.106.84.132.
deployment config :
....
containers:
- name: kafka
image: wurstmeister/kafka
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_ADVERTISED_HOST_NAME
value: 10.106.84.132
- name: KAFKA_ZOOKEEPER_CONNECT
value: zoo1:2181
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: topic1:3:3
Then I test Kafka subscribe and publish from outside the Kafka container on my host, but that fails as follows:
root@edmitchell-virtual-machine:~# kafkacat -b 10.106.84.132:9092 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
The best I could do overall was to delete and recreate the Kafka deployment with
name: KAFKA_ADVERTISED_HOST_NAME
value: localhost
I can then subscribe and publish, but only from within the Kafka container; it doesn't work from outside. If I change the value to anything other than localhost, nothing works.
Any idea? It looks as if Kafka is not well suited to being used with Kubernetes? Maybe I should not deploy Kafka on Kubernetes at all...
many thanks
ed
Thank you, I now understand the NodePort function better.
I still have the same issue:
root@fnature-virtual-machine:~/Zookeeper# kafkacat -b 192.168.198.160:32748 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
I created the nodeport service as you said.
kafka-nodeport NodePort 10.111.234.104 9092:32748/TCP 27m
kafka-service LoadBalancer 10.106.84.132 9092:30351/TCP 1d
I also deleted/recreated the kafka deployment with the following env:
KAFKA_ADVERTISED_PORT: 32748
KAFKA_ADVERTISED_HOST_NAME: 192.168.198.160
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_BROKER_ID: 1
KAFKA_CREATE_TOPICS: topic1:3:3
—
Also, if I run the following from inside the kafka container, I get a similar "Leader not available" error:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --from-beginning
If I create the kafka deployment with KAFKA_ADVERTISED_HOST_NAME: localhost, then the above command works inside the kafka container.
192.168.198.160 is the IP of the default interface ens33 in my Ubuntu VM.
I can't seem to find any logs for Kafka.
The Kafka broker registers an address in ZooKeeper via KAFKA_ADVERTISED_HOST_NAME. However, this address is a Kubernetes cluster IP (10.106.84.132), which is only reachable within the Kubernetes cluster, so a client outside the cluster cannot reach the Kafka broker using this address.
To resolve this problem, you can expose the kafka service on a public IP, either through NodePort or LoadBalancer. For example, run kubectl expose svc $YOUR_KAFKA_SERVICE_NAME --name=kafka-nodeport --type=NodePort, then look up which nodePort is exposed: kubectl get svc kafka-nodeport -o yaml | grep nodePort. In this example, the kafka broker will be accessible via this address: $KUBERNETES_NODE_IP:$NODEPORT.
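For reference, a declarative equivalent of that kubectl expose command could look roughly like the sketch below (the app: kafka selector label and the 30092 nodePort are assumptions; adjust them to whatever labels your kafka pods actually carry). Once the NodePort exists, set KAFKA_ADVERTISED_HOST_NAME to a node IP and KAFKA_ADVERTISED_PORT to the chosen nodePort so the broker advertises an address clients can actually reach.
apiVersion: v1
kind: Service
metadata:
  name: kafka-nodeport
spec:
  type: NodePort
  selector:
    app: kafka              # assumed label; must match the labels on your kafka pods
  ports:
  - port: 9092              # cluster-internal port
    targetPort: 9092        # containerPort of the broker
    nodePort: 30092         # assumed; must be within 30000-32767, or omit it to let k8s pick one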
In k8s, deployment kafka.yaml:
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: "test:1:1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INSIDE://:9092,OUTSIDE://kafka-com:30322"
- name: KAFKA_LISTENERS
value: "INSIDE://:9092,OUTSIDE://:30322"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INSIDE"
The kafka service, i.e. the external service invocation address (or the traefik proxy address):
---
kind: Service
apiVersion: v1
metadata:
name: kafka-com
namespace: dev
labels:
k8s-app: kafka
spec:
selector:
k8s-app: kafka
ports:
- port: 9092
name: innerport
targetPort: 9092
protocol: TCP
- port: 30322
name: outport
targetPort: 30322
protocol: TCP
nodePort: 30322
type: NodePort
Ensure that the Kafka external port and the nodePort are consistent. Other services then call kafka-com:30322. My blog describes this config: config_kafka_in_kubernetes. Hope it helps!
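A quick way to check the external listener, assuming kafkacat is installed on a machine outside the cluster and that machine can resolve kafka-com (for example via an /etc/hosts entry pointing at a node IP), is to ask for cluster metadata:
# <node-ip> is the IP of any Kubernetes node; 30322 is the nodePort from the service above
kafkacat -b <node-ip>:30322 -L
-L prints brokers and topics; the broker address it reports should be the advertised OUTSIDE listener (kafka-com:30322), and that is the address the client will use for produce/consume, so it must be reachable from the client.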
I am trying to connect my pod in a Kubernetes (k8s) cluster to a remote Jaeger server. I've tested it and it works well when both are on the same machine. However, when I run my app on k8s, it cannot connect to Jaeger even though I am using the physical IP.
First, I've tried this:
containers:
- name: api
env:
- name: OTEL__AGENT_HOST
value: <my-physical-ip>
- name: OTEL__AGENT_PORT
value: "6831"
After reading the docs on the internet, I added the Jaeger agent to my deployment as a sidecar container like this:
containers:
- name: api
env:
- name: OTEL__AGENT_HOST
value: "localhost"
- name: OTEL__AGENT_PORT
value: "6831"
- image: jaegertracing/jaeger-agent
name: jaeger-agent
ports:
- containerPort: 5775
protocol: UDP
- containerPort: 6831
protocol: UDP
- containerPort: 6832
protocol: UDP
- containerPort: 5778
protocol: TCP
args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"]
It seems to work very well in both containers, but on the Jaeger collector I received a log like this:
{"level":"warn","ts":1641987200.2678068,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.
HandleStreams failed to receive the preface from client: read tcp 172.20.0.4:14250-><the-ip-of-machine-my-pods-are-deployed>:32852: i/o timeout\"","system":"grpc","grpc_log":true}
I exposed port 14267 on the Jaeger collector on the remote machine, then changed args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"] to args: ["--reporter.grpc.host-port=<my-physical-ip>:14267"], and it works.
Have you tried using the Jaeger operator? https://github.com/jaegertracing/jaeger-operator
This is how you install it:
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.31.0/jaeger-operator.yaml -n observability
Then you can create a Jaeger instance, which will bring up the Jaeger components (collector, agent, query). You can define storage too, e.g. Elasticsearch:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-prod-es
spec:
strategy: production
storage:
type: elasticsearch
options:
es:
server-urls: https://search-test-g7fbo7pzghdquvvgxty2pc6lqu.us-east-2.es.amazonaws.com
index-prefix: jaeger-span
username: test
password: xxxeee
Then, in your application's deployment yaml file, you will need to configure the agent as a sidecar (or you can run the agent as a DaemonSet) so that requests can be forwarded to the collector (see the sketch below).
More details here: https://www.jaegertracing.io/docs/1.31/operator/#deployment-strategies
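As a rough sketch of the sidecar route with the operator, annotating the application's Deployment asks the operator to inject the jaeger-agent container automatically; the deployment name and image below are placeholders, and this assumes the operator watches the deployment's namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                   # placeholder name
  annotations:
    "sidecar.jaegertracing.io/inject": "true"    # jaeger-operator injects a jaeger-agent sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest                     # placeholder image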
I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. Unfortunately, one container starts, while the other fails with an "address already in use" error.
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: multi-container-dep
labels:
app: aspnet-core-multi-container-app
spec:
replicas: 1
selector:
matchLabels:
component: multi-container
template:
metadata:
labels:
component: multi-container
spec:
containers:
- name: cmultiapp
image: multiapp
imagePullPolicy: Never
ports:
- containerPort: 80
- name: cmultiapi
image: multiapi
imagePullPolicy: Never
ports:
- containerPort: 81
The full logs of the failing container are:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried assigning a different port to that container in the YAML file:
ports:
- containerPort: 81
But it does not seem to work. How can I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort, as part of the pod definition, is only for informational purposes.
This means that setting containerPort does not have any influence on what port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell that to the application. It's usually done with flags, envs or config files. Setting a port in the pod/container yaml definition won't change a thing.
You have to remember that the k8s network model is different from docker and docker-compose's model.
So why does the containerPort field exist if it doesn't do anything? you may ask.
Well, actually it's not completely true that it does nothing. Its main purpose is indeed informational/documentation, but it may also be used with services. You can name a port in the pod definition and then use this name to reference the port in a service definition yaml (this only applies to the targetPort field).
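A small illustration of that named-port mechanism (all names here are made up for the example): the port gets its name in the pod spec, and the Service's targetPort refers to that name rather than to a number.
apiVersion: v1
kind: Pod
metadata:
  name: named-port-demo
  labels:
    app: named-port-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: http                # the name is declared on the containerPort
      containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: named-port-demo
spec:
  selector:
    app: named-port-demo
  ports:
  - port: 8080
    targetPort: http            # resolved via the port name, not the number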
Check whether your images expose or try to bind the same port (see the images' Dockerfiles).
I suppose it is because both of your images try to start something on the same port: the first container starts fine, but the second container then tries to use the same port and gets the bind: address already in use error.
You can check the pod logs for each of your containers (with kubectl logs <pod_name> <container_name>) and then it will be clear.
I tried applying your yaml with one of my Docker images (which starts a server on port 8080), and after applying the yaml below I got the same error you got.
apiVersion: apps/v1
kind: Deployment
metadata:
name: multi-container-dep
labels:
app: aspnet-core-multi-container-app
spec:
replicas: 1
selector:
matchLabels:
component: multi-container
template:
metadata:
labels:
component: multi-container
spec:
containers:
- name: cmultiapp
image: shahincsejnu/httpapiserver:v1.0.5
imagePullPolicy: Always
ports:
- containerPort: 8080
- name: cmultiapi
image: shahincsejnu/httpapiserver:v1.0.5
imagePullPolicy: Always
ports:
- containerPort: 8081
I checked the log of the first container, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp and the result is:
int port : :8080
start called
Then I checked the log of the second container, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason of the error
So, I suppose your images also do something like that.
What works
Both of the yamls below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
name: multi-container-dep
labels:
app: aspnet-core-multi-container-app
spec:
replicas: 1
selector:
matchLabels:
component: multi-container
template:
metadata:
labels:
component: multi-container
spec:
containers:
- name: cmultiapp
image: shahincsejnu/httpapiserver:v1.0.5
imagePullPolicy: Always
ports:
- containerPort: 8080
- name: cmultiapi
image: nginx
imagePullPolicy: Always
ports:
- containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
name: multi-container-dep
labels:
app: aspnet-core-multi-container-app
spec:
replicas: 1
selector:
matchLabels:
component: multi-container
template:
metadata:
labels:
component: multi-container
spec:
containers:
- name: cmultiapp
image: shahincsejnu/httpapiserver:v1.0.5
imagePullPolicy: Always
ports:
- containerPort: 80
- name: cmultiapi
image: nginx
imagePullPolicy: Always
ports:
- containerPort: 8081
If you have a docker-compose yaml, please use the Kompose tool to convert it into Kubernetes objects.
Below is the documentation link
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
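A minimal usage example, assuming kompose is installed and your compose file is named docker-compose.yml:
kompose convert -f docker-compose.yml
# writes Kubernetes yaml (Deployments, plus Services where ports are declared),
# which you can then apply with kubectl apply -f <generated files>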
Please use kubectl explain to understand every field of your deployment yaml.
As can be seen in the explanation of ports below, the ports list in the deployment yaml is primarily informational.
Since both containers in the Pod share the same network namespace, the processes running inside the containers cannot bind the same ports.
kubectl explain deployment.spec.template.spec.containers.ports
KIND: Deployment
VERSION: apps/v1
RESOURCE: ports <[]Object>
DESCRIPTION:
List of ports to expose from the container. Exposing a port here gives the
system additional information about the network connections a container
uses, but is primarily informational. Not specifying a port here DOES NOT
prevent that port from being exposed. Any port which is listening on the
default "0.0.0.0" address inside a container will be accessible from the
network. Cannot be updated.
ContainerPort represents a network port in a single container.
FIELDS:
containerPort <integer> -required-
Number of port to expose on the pod's IP address. This must be a valid port
number, 0 < x < 65536.
hostIP <string>
What host IP to bind the external port to.
hostPort <integer>
Number of port to expose on the host. If specified, this must be a valid
port number, 0 < x < 65536. If HostNetwork is specified, this must match
ContainerPort. Most containers do not need this.
name <string>
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
named port in a pod must have a unique name. Name for the port that can be
referred to by services.
protocol <string>
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
For further help, please provide the Dockerfiles for both images, as well as the docker-compose files (or docker run / docker service create commands) for the existing multi-container Docker application.
I solved this by using environment variables and assigning the ASP.NET URL to port 81:
- name: cmultiapi
image: multiapi
imagePullPolicy: Never
ports:
- containerPort: 81
env:
- name: ASPNETCORE_URLS
value: http://+:81
I would also like to mention the url where I got the necessary help. Link is here.
What I want to achieve:
We have an on premise Kafka cluster. I want to set up KSQLDB in OpenShift and connect it to the brokers of the on premise Kafka cluster.
The problem:
When I try to start the KSQLDB server with the command "/usr/bin/ksql-server-start /etc/ksqldb/ksql-server.properties" I get the error message:
[2020-05-14 15:47:48,519] ERROR Failed to start KSQL (io.confluent.ksql.rest.server.KsqlServerMain:60)
io.confluent.ksql.util.KsqlServerException: Could not get Kafka cluster configuration!
at io.confluent.ksql.services.KafkaClusterUtil.getConfig(KafkaClusterUtil.java:90)
at io.confluent.ksql.security.KsqlAuthorizationValidatorFactory.isKafkaAuthorizerEnabled(KsqlAuthorizationValidatorFactory.java:81)
at io.confluent.ksql.security.KsqlAuthorizationValidatorFactory.create(KsqlAuthorizationValidatorFactory.java:51)
at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:624)
at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:544)
at io.confluent.ksql.rest.server.KsqlServerMain.createExecutable(KsqlServerMain.java:98)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:56)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1589471268517) timed out at 1589471268518 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.ksql.services.KafkaClusterUtil.getConfig(KafkaClusterUtil.java:60)
... 6 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1589471268517) timed out at 1589471268518 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
My configuration:
I set up my Dockerfile based on this image: https://hub.docker.com/r/confluentinc/ksqldb-server; ports 9092, 9093, 8080, 8082 and 443 are open.
My service yaml looks like this:
kind: Service
apiVersion: v1
metadata:
name: social-media-dev
namespace: abc
selfLink: xyz
uid: xyz
resourceVersion: '1'
creationTimestamp: '2020-05-14T09:47:15Z'
labels:
app: social-media-dev
annotations:
openshift.io/generated-by: OpenShiftNewApp
spec:
ports:
- name: social-media-dev
protocol: TCP
port: 9092
targetPort: 9092
nodePort: 31364
selector:
app: social-media-dev
deploymentconfig: social-media-dev
clusterIP: XX.XX.XXX.XXX
type: LoadBalancer
externalIPs:
- XXX.XX.XXX.XXX
sessionAffinity: None
externalTrafficPolicy: Cluster
status:
loadBalancer:
ingress:
- ip: XX.XX.XXX.XXX
My ksql-server.properties file includes the following information:
listeners: http://0.0.0.0:8082
bootstrap.servers: X.X.X.X:9092, X.X.X.Y:9092, X.X.X.Z:9092
What I have tried so far:
I tried to connect from within my pod to a broker and it worked: (timeout 1 bash -c '</dev/tcp/X.X.X.X/9092 && echo PORT OPEN || echo PORT CLOSED') 2>/dev/null
result: PORT OPEN
I also played around with the listeners, but then the error message just got shorter, containing only "Could not get Kafka cluster configuration!" without the timeout error.
I also tried switching from LoadBalancer to NodePort, but without success.
Do you have any ideas what I could try next?
UPDATE:
With an upgrade to Cloudera CDH6, the Cloudera Kafka cluster now also works with Kafka Streams. Hence I was able to connect from my KSQLDB cluster in OpenShift to the on-premise Kafka cluster.
I will also describe my final way of connecting to the kerberized Kafka cluster here, as I struggled a lot to get it running:
Getting Kerberos tickets and establishing connections via SSL
ksql-server.properties (the sasl_ssl part of it):
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
ssl.truststore.type=JKS
ssl.ca.location=cert
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="my.keytab" serviceName="kafka" principal="myprincipal";
producer.ssl.endpoint.identification.algorithm=HTTPS
producer.security.protocol=SASL_SSL
producer.ssl.truststore.location=truststore.jks
producer.ssl.truststore.password=password
producer.sasl.mechanism=GSSAPI
producer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="my.keytab" serviceName="kafka" principal="myprincipal";
consumer.ssl.endpoint.identification.algorithm=HTTPS
consumer.security.protocol=SASL_SSL
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=password
consumer.sasl.mechanism=GSSAPI
consumer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="my.keytab" serviceName="kafka" principal="myprincipal";
Set up the corresponding Sentry rules:
HOST=[HOST]->CLUSTER=kafka-cluster->action=idempotentwrite
HOST=[HOST]->TRANSACTIONALID=[ID]->action=describe
HOST=[HOST]->TRANSACTIONALID=[ID]->action=write
I am trying to remote debug the application in attach mode with host 192.168.99.100 and port 5005, but it tells me that it is unable to open the debugger port. The IP is 192.168.99.100 (the cluster is hosted locally via minikube).
Output of kubectl describe service catalogservice
Name: catalogservice
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=catalogservice
Type: NodePort
IP: 10.98.238.198
Port: web 31003/TCP
TargetPort: 8080/TCP
NodePort: web 31003/TCP
Endpoints: 172.17.0.6:8080
Port: debug 5005/TCP
TargetPort: 5005/TCP
NodePort: debug 32003/TCP
Endpoints: 172.17.0.6:5005
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
This is the pod's service.yml:
apiVersion: v1
kind: Service
metadata:
name: catalogservice
spec:
type: NodePort
selector:
app: catalogservice
ports:
- name: web
protocol: TCP
port: 31003
nodePort: 31003
targetPort: 8080
- name: debug
protocol: TCP
port: 5005
nodePort: 32003
targetPort: 5005
And here I expose the container's ports:
spec:
containers:
- name: catalogservice
image: elps/myimage
ports:
- containerPort: 8080
name: app
- containerPort: 5005
name: debug
The way I build the image:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8082
ADD /target/catalogservice-0.0.1-SNAPSHOT.jar catalogservice-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n", "-jar", "catalogservice-0.0.1-SNAPSHOT.jar"]
When I execute nmap -p 5005 192.168.99.100 I receive
PORT STATE SERVICE
5005/tcp closed avt-profile-2
When I execute nmap -p 32003 192.168.99.100 I receive
PORT STATE SERVICE
32003/tcp closed unknown
When I execute nmap -p 31003 192.168.99.100 I receive
PORT STATE SERVICE
31003/tcp open unknown
When I execute kubectl get services I receive
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalogservice NodePort 10.108.195.102 <none> 31003:31003/TCP,5005:32003/TCP 14m
minikube service customerservice --url returns
http://192.168.99.100:32004
As an alternative to using a NodePort in a Service, you could also use kubectl port-forward to access the debug port in your Pod.
Since Kubernetes v1.10, kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to forward to.
You need to expose the debug port in the Deployment yaml for the Pod:
spec:
containers:
...
ports:
...
- containerPort: 5005
Then get the name of your Pod via
kubectl get pods
and then add a port-forwarding to that Pod
kubectl port-forward podname 5005:5005
In IntelliJ you will be able to connect to
Host: localhost
Port: 5005
Alternatively, you can use the Cloud Code Intellij plugin.
Also, if you use Fabric8, it provides the fabric8:debug goal.
There was a slip in the yaml you first posted as:
- containerPort: 5050
name: debug
Should be:
- containerPort: 5005
name: debug
You also need to use the external port of 32003 when configuring the IntelliJ debugger. With those changes it should work.
You may also want to think about how to make this more flexible. In the past, when I've done this, I've used a different form of the Docker start command that lets you turn remote debugging on and off via an environment variable called REMOTE_DEBUG, which for you would be:
CMD if [ "x$REMOTE_DEBUG" = "xfalse" ] ; then java $JAVA_OPTS -jar catalogservice-0.0.1-SNAPSHOT.jar ; else java $JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n -jar catalogservice-0.0.1-SNAPSHOT.jar ; fi
You'll probably find you want to set the env var $JAVA_OPTS to limit JVM memory use and avoid issues in k8s.
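For completeness, a minimal sketch of how those two variables could be supplied from the deployment's container spec; the values are only examples and should be tuned to your pod's resource limits.
env:
- name: REMOTE_DEBUG
  value: "true"                 # "false" starts the jar without the jdwp agent
- name: JAVA_OPTS
  value: "-Xms128m -Xmx256m"    # example heap bounds; keep them below the container memory limit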
I am new to Prometheus and relatively new to kubernetes so bear with me, please. I am trying to test Prometheus out and have tried two different approaches.
1. Run Prometheus as a Docker container outside of Kubernetes. To accomplish this I have created this Dockerfile:
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
and this yaml file:
global:
scrape_interval: 15s
external_labels:
monitor: 'codelab-monitor'
scrape_configs:
- job_name: 'kubernetes-apiservers'
scheme: http
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: endpoints
api_server: localhost:443
When I run this I get:
Failed to list *v1.Pod: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Service: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
in a loop. Prometheus loads when I go to localhost:9090, but there is no data.
2. I thought deploying Prometheus as a Kubernetes deployment might help, so I made this yaml and deployed it.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: prometheus-monitor
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus-monitor
image: prom/prometheus
# args:
# - '-config.file=/etc/prometheus/prometheus.yaml'
imagePullPolicy: IfNotPresent
ports:
- name: webui
containerPort: 9090
The deployment was successful, but if I go to localhost:9090 I get 'ERR_SOCKET_NOT_CONNECTED'. (my port is forwarded)
Can anyone tell me the advantage of running in vs. out of Kubernetes, and how to fix at least one of these issues?
Also, my config file is suppressed because it was giving an error; I will look into that once I am able to get Prometheus loaded.
Kubernetes does not map the port outside its cluster when you deploy your container.
You also have to create a Service (it can be in the same file) to make it available from your workstation. Append this to your prometheus yaml:
---
apiVersion: v1
kind: Service
metadata:
name: prometheus-web
labels:
app: prometheus
spec:
type: NodePort
ports:
- port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
name: webui
selector:
app: prometheus
NodePort opens the given port on all of your nodes. You should be able to see the frontend at http://localhost:30090/
By default, Kubernetes allows ports 30000 to 32767 for the NodePort type (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport).
Please consider reading the documentation in general for more information on services in kubernetes: https://kubernetes.io/docs/concepts/services-networking/service/
So, there are 2 different issues here. On the first:
You are trying to connect to localhost:443, where Prometheus is running and expecting to talk to a Kubernetes API server. Apparently, nothing is listening on localhost:443. Are you doing port forwarding to your kube-apiserver?
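One common workaround for that first setup, sketched here as an assumption about your environment rather than a tested recipe, is to run kubectl proxy on the machine where Prometheus runs and point the discovery config at the proxy, which handles authentication for you:
# on the host running the Prometheus container
kubectl proxy --port=8001
# in prometheus.yml
scrape_configs:
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: http://localhost:8001    # the kubectl proxy endpoint; no TLS or bearer token needed here
Note that from inside a Docker container, localhost is the container itself, so you may need host networking or the host's address instead of localhost.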
On the second: you need to expose your deployment's port, with something like:
kubectl expose deployment prometheus-monitor --type=LoadBalancer # or
kubectl expose deployment prometheus-monitor --type=NodePort
depending on how you want to expose your service. NodePort exposes it in a Service that maps to a port on your Kubernetes nodes (IPAddress:Port), and LoadBalancer exposes the deployment using an external load balancer, which may vary depending on which cloud you are using (AWS, GCP, OpenStack, Azure, etc.). More about exposing your Deployments, DaemonSets or StatefulSets here. More about Services here.
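And if you go with the in-cluster deployment (your second approach), it still needs a working scrape config once it is exposed. A rough sketch of the usual apiserver job, based on the standard example configuration, assuming Prometheus runs in-cluster with a service account that is allowed to list endpoints (RBAC not shown); with no api_server set, the in-cluster service account config is used automatically:
scrape_configs:
- job_name: 'kubernetes-apiservers'
  scheme: https                       # the apiserver speaks TLS, not plain http
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: endpoints                   # no api_server: use the in-cluster config
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https   # keep only the apiserver endpoint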
Hope it helps.