How to access the management UI for RabbitMQ from minikube?

I have a docker-compose file with the rabbitmq management image running, and I am able to access the management UI.
$ cat docker-compose.yml
---
version: '3.7'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - '5672:5672'
      - '15672:15672'
    environment:
      RABBITMQ_DEFAULT_VHOST: storage-collector-dev
      RABBITMQ_DEFAULT_USER: dev
      RABBITMQ_DEFAULT_PASS: dev
I am trying to convert that to Kubernetes Pods and services.
I am using Mac to run minikube.
Here are my files
$ tree kubernetes/
kubernetes/
└── coreservices
    ├── rabbitmq_pod.yml
    └── rabbitmq_service.yml
$ cat kubernetes/coreservices/rabbitmq_pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq-pod
  labels:
    app: rabbitmq
spec:
  containers:
    - name: rabbitmq-pod
      image: rabbitmq:management
      ports:
        - containerPort: 5672
          name: amqp
        - containerPort: 15672
          name: http
      env:
        - name: RABBITMQ_DEFAULT_VHOST
          value: storage-collector-dev
        - name: RABBITMQ_DEFAULT_USER
          value: dev
        - name: RABBITMQ_DEFAULT_PASS
          value: dev
...
$ cat kubernetes/coreservices/rabbitmq_service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  type: NodePort
  selector:
    app: rabbitmq
  ports:
    - port: 5672
      targetPort: 5672
      name: amqp
    - port: 15672
      targetPort: 15672
      nodePort: 31672
      name: http
...
Then I apply these files
$ kubectl apply -f kubernetes/coreservices/
pod/rabbitmq-pod created
service/rabbitmq created
It creates the pod and the service. I then get the minikube IP to access the management UI for RabbitMQ.
$ minikube ip
127.0.0.1
When I try to access http://127.0.0.1:31672, it gives a page not found error.

You need to run minikube service rabbitmq to expose the NodePort service through a minikube tunnel, and minikube service rabbitmq --url to get the URL.
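For example (the URLs below are illustrative; minikube prints one URL per exposed service port, and with the Docker driver on macOS the command opens a tunnel that has to stay running):
$ minikube service rabbitmq --url
http://127.0.0.1:55001
http://127.0.0.1:55002
Open the URL that corresponds to the port named http (15672) in a browser to reach the management UI, and log in with the dev/dev credentials from the pod definition.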

Related

How to expose MariaDB in Kubernetes?

I have a Docker container with MariaDB running in Microk8s (running on a single Unix machine).
# Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:latest
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: sa
          ports:
            - containerPort: 3306
These are the logs:
(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
Now,
connecting to port 3306 on the machine does not work.
connecting after exposing the pod with a service (any type) on port 8081 also does not work.
How can I get the connection through?
The answer was given in the comments, but to clarify I am posting the solution here as a Community Wiki.
In this case the connection problem was resolved by setting spec.selector correctly.
The .spec.selector field defines how the Deployment finds which Pods to manage; it must select a label that is defined in the Pod template (here app: mariadb).
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
You need to use a Service whose selector matches that label.
An example Service (the selector must match the Pod template label app: mariadb):
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: ClusterIP
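To confirm that the selector actually matches the Pod, check that the Service has endpoints (the IP below is illustrative):
$ kubectl get endpoints mariadb
NAME      ENDPOINTS          AGE
mariadb   10.1.254.96:3306   1m
An empty ENDPOINTS column would mean the selector does not match any Pod labels.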
You can then use the service name to connect, or change the service type to LoadBalancer to expose it with an external IP.
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: LoadBalancer
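As a minimal in-cluster test, assuming the default namespace, you can start a throwaway client pod and connect through the service DNS name (the mariadb image ships with the mariadb client, and -psa is the MARIADB_ROOT_PASSWORD value from the deployment):
$ kubectl run mariadb-client --rm -it --restart=Never --image=mariadb:latest -- mariadb -h mariadb -u root -psa -e 'SELECT 1'
Note that on MicroK8s a LoadBalancer service only receives an external IP if a load balancer implementation is enabled, for example with microk8s enable metallb.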

Kubernetes - minikube service connection timeout

I have a Docker environment with 3 containers: nginx, PHP with Laravel and a MySQL database. It works fine, and I'm now trying to learn Kubernetes.
I was hoping to create a deployment and a service just for the nginx container to make it simple to start with:
Here is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolkit-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      container: toolkit-server
  template:
    metadata:
      labels:
        container: toolkit-server
    spec:
      containers:
        - name: toolkit-server
          image: my/toolkit-server:test
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: my-cred
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    container: toolkit-server
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
  type: LoadBalancer
And just in case it's needed, here is the nginx part of the docker-compose.yaml:
version: "3.8"
services:
  server:
    build:
      context: .
      dockerfile: dockerfiles/nginx.dockerfile
    ports:
      - "8000:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    container_name: toolkit-server
The deployment is created successfully and I can see that 1/1 pods are running.
However when I run minikube service backend, the URL I get just times out.
I was expecting to see some sort of nginx page, maybe an nginx error - but with a time out I'm not sure what the next step is.
I'm brand new to Kubernetes, so there's a good chance I've messed the ports up or something basic. Any help appreciated.
Edit:
As advised by @david-maze I changed the following in the deployment.yaml:
ports:
- containerPort: 80
And the following in service.yaml:
targetPort: 80
This gave me an nginx error page when viewed in the browser, as expected, but crucially it no longer timed out.
This is a community wiki answer posted for better visibility. Feel free to expand it.
As discussed in the comments, the issue was due to a wrong port configuration.
targetPort is the port the Service forwards requests to, i.e. the port your Pod is listening on; the application in the container must also be listening on this port.
containerPort declares the port on which the app can be reached inside the container.
In your case nginx listens on port 80 inside the container (the 8000 in the compose mapping "8000:80" is the host-side port), so the deployment should have:
ports:
- containerPort: 80
and the service:
targetPort: 80
This will stop the connection from timing out.
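To verify, you can ask minikube for the service URL again and hit it with curl (the URL and port below are illustrative; use whatever minikube prints):
$ minikube service backend --url
http://192.168.49.2:30123
$ curl -I http://192.168.49.2:30123
You should now get a response from nginx (possibly an error page, as noted in the edit above) instead of a timeout.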

How do I set these docker-compose ports in a kubernetes yaml file?

Given the following ports defined in a docker-compose.yml file, how do I do the equivalent in a kubernetes yml file?
docker-compose.yml
seq.logging:
  image: datalust/seq
  networks:
    - backend
  container_name: seq.logging
  environment:
    - ACCEPT_EULA=Y
  ports:
    - "5300:80" # UI
    - "5301:5341" # Data ingest
kubernetes.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      # ports: ?????????????????????????????????????
      #   - containerPort: "5300:80" # UI
      #   - containerPort: "5301:5341" # Data ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
You do not expose a port using the Pod/Deployment yaml; Services are the way to do it. You can either create multiple Services on top of your Pod/Deployment, but this will result in multiple IP addresses, or name each port and create a single multi-port Service definition.
In your case it should look somewhat like this (note this is just a quickly written example). Also:
When using multiple ports you must give all of your ports names, so that endpoints can be disambiguated.
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      ports:
        - containerPort: 80 # UI
          name: ui
        - containerPort: 5341 # Data ingest
          name: data-ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
---
apiVersion: v1
kind: Service
metadata:
  name: seq-logging-service
spec:
  type: #service type
  selector:
    app: infrastructure # must match the Pod's labels so the Service gets endpoints
  ports:
    - name: ui
      port: 5300
      targetPort: 80
    - name: data-ingest
      port: 5301
      targetPort: 5341
Some more resources:
- docs about connecting applications with Services;
- an example yaml from the above, featuring a deployment with a multi-port container and the corresponding service.
Update: the API reference describes containerPort as follows:
List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
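As a quick sanity check (assuming the manifests above are saved in a file such as seq.yaml, a hypothetical name, and the service type is filled in), you can apply them and confirm that both named ports are exposed and have endpoints:
$ kubectl apply -f seq.yaml
$ kubectl get svc seq-logging-service
$ kubectl get endpoints seq-logging-service
The endpoints output should list the pod IP twice, once for port 80 and once for port 5341; if it is empty, the Service selector does not match the Pod's labels.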

How to configure multiple services/containers in Kubernetes?

I am new to Docker and Kubernetes.
Technologies used:
Dotnet Core 2.2
Asp.NET Core WebAPI 2.2
Docker for Windows (Edge) with Kubernetes support enabled
Code
I have two services hosted in two docker containers, container1 and container2.
Below is my deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
        - name: webapi-dockerkube
          image: "webapidocker:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /api/values
              port: 80
          readinessProbe:
            httpGet:
              path: /api/values
              port: 80
        - name: webapi-dockerkube2
          image: "webapidocker2:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /api/other/values
              port: 80
          readinessProbe:
            httpGet:
              path: /api/other/values
              port: 80
When I run the command:
kubectl create -f .\deploy.yaml
the pod status is CrashLoopBackOff, but the same deployment runs fine when I have only one container configured.
When checking the logs I get the following error:
Error from server (BadRequest): a container name must be specified for pod webapi-dockerkube-8658586998-9f8mk, choose one of: [webapi-dockerkube webapi-dockerkube2]
You are running two containers in the same pod, and both bind to port 80. This is not possible within the same pod.
Think of a pod like a 'server' and you can't have two processes bind to the same port.
Solution in your situation: use different ports inside the pod, or use separate pods. From your deployment there seem to be no shared resources like a filesystem, so it would be easy to split the containers into separate pods.
Note that it is not enough to change the pod definition if you want to keep both containers in the same pod with different ports: the application in the second container must also be configured to bind to a different port, as sketched below.
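A minimal sketch of the same-pod option, showing only the containers section of your deploy.yaml. It assumes the second ASP.NET Core service can be told to bind to another port via the standard ASPNETCORE_URLS environment variable; 8080 is an arbitrary choice, and the probes must be updated to match:
containers:
  - name: webapi-dockerkube
    image: "webapidocker:latest"
    imagePullPolicy: IfNotPresent
    ports:
      - containerPort: 80
  - name: webapi-dockerkube2
    image: "webapidocker2:latest"
    imagePullPolicy: IfNotPresent
    env:
      - name: ASPNETCORE_URLS   # tell Kestrel to listen on 8080 instead of 80
        value: "http://+:8080"
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /api/other/values
        port: 8080
    readinessProbe:
      httpGet:
        path: /api/other/values
        port: 8080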
Here is an example of a multi-container pod you can use as a template; its two containers do not clash because they interact through a shared volume rather than the same port:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian-container
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
You can also check the container logs with kubectl logs to find the reason for the CrashLoopBackOff; since the pod has more than one container, you must pass the container name with -c.
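For example, using the pod and container names from the error message above:
$ kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube
$ kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube2
$ kubectl describe pod webapi-dockerkube-8658586998-9f8mk
kubectl describe shows each container's last state and restart reason, which usually points at which of the two is crashing.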

kubernetes redirecting outgoing http traffic from the service to localhost:port

I have a chart with two containers in it:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: catalog
  labels:
    app: catalog
    chart: catalog-0.1.0
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: catalog:v1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
        - name: myproxy
          image: myproxy:v1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8008
              protocol: TCP
          env:
            - name: PROXY_PORT
              value: '8080'
---
apiVersion: v1
kind: Service
metadata:
  name: catalog
  labels:
    app: catalog
    chart: catalog-0.1.0
    heritage: Tiller
spec:
  type: NodePort
  ports:
    - port: 8008
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: catalog
I need to redirect all outbound HTTP traffic from the catalog container to the myproxy container via localhost, so that inside the proxy I can decide whether the catalog is allowed to send a given request, log it, etc.
Please advise whether it is possible to implement this with Kubernetes.
Thanks.
Update:
The problem is that I cannot change the code in the catalog container to send queries to localhost.
The container also does not have iptables, so I cannot do something like this:
containers:
  - name: catalog
    image: catalog:v1
    imagePullPolicy: IfNotPresent
    command:
      - 'iptables -t nat -A OUTPUT -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8008'
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
Ideally this would be done with Kubernetes.
If the catalog application respects the http_proxy environment variable, it is easy: just add an environment variable to the catalog container.
- name: catalog
  image: catalog:v1
  imagePullPolicy: IfNotPresent
  ports:
    - name: http
      containerPort: 8080
      protocol: TCP
  env:
    - name: HTTP_PROXY
      value: localhost:8008
For your update: if you need to manipulate iptables, you can add an init container, for example:
initContainers:
  - image: centos
    imagePullPolicy: Always
    name: run-iptables
    securityContext:
      privileged: true
    command:
      - "sh"
      - "-c"
      - 'yum -y install iptables; iptables -t nat -A OUTPUT -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8008'
Since all containers in a pod share the same network namespace, the rule affects the catalog container as well.
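A quick way to confirm this, assuming curl (or another HTTP client) is available in the catalog image and with <catalog-pod> standing in for the actual pod name, is to exec into the catalog container and hit the proxy over localhost:
$ kubectl exec -it <catalog-pod> -c catalog -- curl -s http://localhost:8008
If the proxy answers, traffic DNAT-ed to 127.0.0.1:8008 by the iptables rule will reach it as well.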
