I am a newbie in frontend/backend/DevOps, but I need to use Kubernetes to deploy an app on Google Cloud Platform (GCP) to provide a service. I started learning by following this series of tutorials:
https://mickeyabhi1999.medium.com/build-and-deploy-a-web-app-with-react-flask-nginx-postgresql-docker-and-google-kubernetes-e586de159a4d
https://medium.com/swlh/build-and-deploy-a-web-app-with-react-flask-nginx-postgresql-docker-and-google-kubernetes-341f3b4de322
And the code of this tutorial series is here: https://github.com/abhiChakra/Addition-App
Everything was fine until the last step: using "gcloud builds submit ..." to build
1. the nginx+react service
2. the flask+wsgi service
3. the nginx+react deployment
4. the flask+wsgi deployment
on a GCP cluster.
Items 1-3 went well and their status was "OK", but the status of the flask+wsgi deployment was "Does not have minimum availability", even after many restarts.
I used "kubectl get pods" and saw the status of the flask pod was "CrashLoopBackOff".
Then I followed the debugging process suggested here:
https://containersolutions.github.io/runbooks/posts/kubernetes/crashloopbackoff/
I used "kubectl describe pod flask" to look into the problem of the flask pod. Then I found the "Exit Code" was 139 and there were messages "Liveness probe failed: Get "http://10.24.0.25:8000/health": read tcp 10.24.0.1:55470->10.24.0.25:8000: read: connection reset by peer" and "Readiness probe failed: Get "http://10.24.0.25:8000/ready": read tcp 10.24.0.1:55848->10.24.0.25:8000: read: connection reset by peer".
The complete log:
Name: flask-676d5dd999-cf6kt
Namespace: default
Priority: 0
Node: gke-addition-app-default-pool-89aab4fe-3l1q/10.140.0.3
Start Time: Thu, 11 Nov 2021 19:06:24 +0800
Labels: app.kubernetes.io/managed-by=gcp-cloud-build-deploy
component=flask
pod-template-hash=676d5dd999
Annotations: <none>
Status: Running
IP: 10.24.0.25
IPs:
IP: 10.24.0.25
Controlled By: ReplicaSet/flask-676d5dd999
Containers:
flask:
Container ID: containerd://5459b747e1d44046d283a46ec1eebb625be4df712340ff9cf492d5583a4d41d2
Image: gcr.io/peerless-garage-330917/addition-app-flask:latest
Image ID: gcr.io/peerless-garage-330917/addition-app-flask@sha256:b45d25ffa8a0939825e31dec1a6dfe84f05aaf4a2e9e43d35084783edc76f0de
Port: 8000/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 12 Nov 2021 17:24:14 +0800
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Fri, 12 Nov 2021 17:17:06 +0800
Finished: Fri, 12 Nov 2021 17:19:06 +0800
Ready: False
Restart Count: 222
Limits:
cpu: 1
Requests:
cpu: 400m
Liveness: http-get http://:8000/health delay=120s timeout=1s period=5s #success=1 #failure=3
Readiness: http-get http://:8000/ready delay=120s timeout=1s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s97x5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-s97x5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s97x5
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 9m7s (x217 over 21h) kubelet (combined from similar events): Liveness probe failed: Get "http://10.24.0.25:8000/health": read tcp 10.24.0.1:48636->10.24.0.25:8000: read: connection reset by peer
Warning BackOff 4m38s (x4404 over 22h) kubelet Back-off restarting failed container
Following the suggestion here:
https://containersolutions.github.io/runbooks/posts/kubernetes/crashloopbackoff/#step-4
I increased "initialDelaySeconds" to 120, but it still failed.
Because everything worked fine on my local laptop, I think there could be some connection or authentication issue.
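(For reference, the commands I used boil down to something like this — a sketch, with the pod name taken from the kubectl describe output above; kubectl logs --previous shows the output of the most recently crashed container, and exit code 139 means the process received SIGSEGV:)
kubectl get pods
kubectl describe pod flask-676d5dd999-cf6kt
kubectl logs flask-676d5dd999-cf6kt --previous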
To be more detailed, the deployment.yaml looks like:
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer
  selector:
    app: react
    tier: ui
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flask
spec:
  type: ClusterIP
  selector:
    component: flask
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
spec:
  replicas: 1
  selector:
    matchLabels:
      component: flask
  template:
    metadata:
      labels:
        component: flask
    spec:
      containers:
        - name: flask
          image: gcr.io/peerless-garage-330917/addition-app-flask:latest
          imagePullPolicy: "Always"
          resources:
            limits:
              cpu: "1000m"
            requests:
              cpu: "400m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 5
          ports:
            - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
      tier: ui
  template:
    metadata:
      labels:
        app: react
        tier: ui
    spec:
      containers:
        - name: ui
          image: gcr.io/peerless-garage-330917/addition-app-nginx:latest
          imagePullPolicy: "Always"
          resources:
            limits:
              cpu: "1000m"
            requests:
              cpu: "400m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
          ports:
            - containerPort: 8080
docker-compose.yaml:
# we will be creating these services
services:
  flask:
    # Note that we are building from our current terminal directory where our Dockerfile is located, we use .
    build: .
    # naming our resulting container
    container_name: flask
    # publishing a port so that external services requesting port 8000 on your local machine
    # are mapped to port 8000 on our container
    ports:
      - "8000:8000"
  nginx:
    # Since our Dockerfile for web-server is located in the react-app folder, our build context is ./react-app
    build: ./react-app
    container_name: nginx
    ports:
      - "8080:8080"
Nginx Dockerfile:
# first building react project, using node base image
FROM node:10 as build-stage
# setting working dir inside container
WORKDIR /react-app
# required to install packages
COPY package*.json ./
# installing npm packages
RUN npm install
# copying over react source material
COPY src ./src
# copying over further react material
COPY public ./public
# copying over our nginx config file
COPY addition_container_server.conf ./
# creating production build to serve through nginx
RUN npm run build
# starting second, nginx build-stage
FROM nginx:1.15
# removing default nginx config file
RUN rm /etc/nginx/conf.d/default.conf
# copying our nginx config
COPY --from=build-stage /react-app/addition_container_server.conf /etc/nginx/conf.d/
# copying production build from last stage to serve through nginx
COPY --from=build-stage /react-app/build/ /usr/share/nginx/html
# exposing port 8080 on container
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Nginx server config:
server {
    listen 8080;
    # location of react build files
    root /usr/share/nginx/html/;
    # index html from react build to serve
    index index.html;
    # ONLY KUBERNETES RELEVANT: endpoint for health checkup
    location /health {
        return 200 "health ok";
    }
    # ONLY KUBERNETES RELEVANT: endpoint for readiness checkup
    location /ready {
        return 200 "ready";
    }
    # html file to serve with / endpoint
    location / {
        try_files $uri /index.html;
    }
    # proxying under /api endpoint
    location /api {
        client_max_body_size 10m;
        add_header 'Access-Control-Allow-Origin' http://<NGINX_SERVICE_ENDPOINT>:8080;
        proxy_pass http://flask:8000/;
    }
}
There are two important functions in App.js:
...
insertCalculation(event, calculation){
    /*
    Making a POST request via a fetch call to Flask API with numbers of a
    calculation we want to insert into DB. Making fetch call to web server
    IP with /api/insert_nums which will be reverse proxied via Nginx to the
    Application (Flask) server.
    */
    event.preventDefault();
    fetch('http://<NGINX_SERVICE_ENDPOINT>:8080/api/insert_nums', {method: 'POST',
        mode: 'cors',
        headers: {
            'Content-Type' : 'application/json'
        },
        body: JSON.stringify(calculation)}
    ).then((response) => {
...
getHistory(event){
    /*
    Making a GET request via a fetch call to Flask API to retrieve calculations history.
    */
    event.preventDefault()
    fetch('http://<NGINX_SERVICE_ENDPOINT>:8080/api/data', {method: 'GET',
        mode: 'cors'
        }
    ).then(response => {
...
Flask Dockerfile:
# using base image
FROM python:3.8
# setting working dir inside container
WORKDIR /addition_app_flask
# adding run.py to workdir
ADD run.py .
# adding config.ini to workdir
ADD config.ini .
# adding requirements.txt to workdir
ADD requirements.txt .
# installing flask requirements
RUN pip install -r requirements.txt
# adding in all contents from flask_app folder into a new flask_app folder
ADD ./flask_app ./flask_app
# exposing port 8000 on container
EXPOSE 8000
# serving flask backend through a WSGI server (gevent pywsgi)
CMD [ "python", "run.py" ]
run.py:
from gevent.pywsgi import WSGIServer
from flask_app.app import app

# As flask's built-in server is not suitable for production, we will use
# a WSGIServer instance to serve our flask application.
if __name__ == '__main__':
    WSGIServer(('0.0.0.0', 8000), app).serve_forever()
app.py:
from flask import Flask, request, jsonify
from flask_app.storage import insert_calculation, get_calculations

app = Flask(__name__)

@app.route('/')
def index():
    return "My Addition App", 200

@app.route('/health')
def health():
    return '', 200

@app.route('/ready')
def ready():
    return '', 200

@app.route('/data', methods=['GET'])
def data():
    '''
    Function used to get calculations history
    from Postgres database and return to fetch call in frontend.
    :return: Json format of either collected calculations or error message
    '''
    calculations_history = []
    try:
        calculations = get_calculations()
        for key, value in calculations.items():
            calculations_history.append(value)
        return jsonify({'calculations': calculations_history}), 200
    except:
        return jsonify({'error': 'error fetching calculations history'}), 500

@app.route('/insert_nums', methods=['POST'])
def insert_nums():
    '''
    Function used to insert a calculation into our postgres
    DB. Operands of operation received from frontend.
    :return: Json format of either success or failure response.
    '''
    insert_nums = request.get_json()
    firstNum, secondNum, answer = insert_nums['firstNum'], insert_nums['secondNum'], insert_nums['answer']
    try:
        insert_calculation(firstNum, secondNum, answer)
        return jsonify({'Response': 'Successfully inserted into DB'}), 200
    except:
        return jsonify({'Response': 'Unable to insert into DB'}), 500
I can't tell what is going wrong. I also wonder what the better way to debug such a cloud deployment would be. In normal programs we can set breakpoints and print or log things to find the location of the code causing the problem; in a cloud deployment, however, I have lost my sense of direction for debugging.
...Exit Code was 139...
This could mean there's a bug in your Flask app. You can start with a minimal spec instead of trying to do it all in one go:
apiVersion: v1
kind: Pod
metadata:
  name: flask
  labels:
    component: flask
spec:
  containers:
    - name: flask
      image: gcr.io/peerless-garage-330917/addition-app-flask:latest
      ports:
        - containerPort: 8000
See if your pod starts accordingly. If it does, try connecting to it with kubectl port-forward <flask pod name> 8000:8000, followed by curl localhost:8000/health. You should watch your application logs the whole time with kubectl logs -f <flask pod name>.
Thanks for @gohm'c's response! It is a good suggestion to isolate the different parts and start from a smaller component. As suggested, I tried deploying a single flask pod first. Then I used
kubectl port-forward flask 8000:8000
to map the port to my local machine. After using
curl localhost:8000/health
to access the endpoint, it showed
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
Handling connection for 8000
E1112 18:52:15.874759 300145 portforward.go:400] an error occurred forwarding 8000 -> 8000: error forwarding port 8000 to pod 4870b939f3224f968fd5afa4660a5af7d10e144ee85149d69acff46a772e94b1, uid : failed to execute portforward in network namespace "/var/run/netns/cni-32f718f0-1248-6da4-c726-b2a5bf1918db": read tcp4 127.0.0.1:38662->127.0.0.1:8000: read: connection reset by peer
At this moment, using
kubectl logs -f flask
returned empty response.
So there are indeed some issues in the flask app.
The health probe handler is a really simple function in app.py:
@app.route('/health')
def health():
    return '', 200
How can I know if the route setting is wrong or not?
Is it because of the WSGIServer in run.py?
from gevent.pywsgi import WSGIServer
from flask_app.app import app

# As flask's built-in server is not suitable for production, we will use
# a WSGIServer instance to serve our flask application.
if __name__ == '__main__':
    WSGIServer(('0.0.0.0', 8000), app).serve_forever()
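One way to rule the WSGI server in or out (just a sketch, not something from the tutorial) is to temporarily swap the gevent server for Flask's built-in one, rebuild the image, and see whether the crash disappears:

# run.py, temporary debugging variant (sketch, not part of the tutorial)
from flask_app.app import app

if __name__ == '__main__':
    # Flask's development server, used only to isolate the crash
    app.run(host='0.0.0.0', port=8000)

If the pod stops crashing with this change, the gevent/WSGIServer layer (or a native extension it pulls in) is the suspect; if it still crashes, the problem is in the app code or its imports.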
If we look at the Dockerfile, it seems to expose the correct port, 8000.
If I directly run
python run.py
on my laptop, I can successfully access localhost:8000.
How can I debug with this kind of problem?
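(One way to narrow this down — a sketch, using the image name from the manifests above — is to run the exact same image outside Kubernetes and hit the endpoint directly; docker run returns the container's exit code, so a local segfault will also show up as 139:)
docker run --rm -p 8000:8000 gcr.io/peerless-garage-330917/addition-app-flask:latest
# in a second terminal:
curl -v localhost:8000/health
# back in the first terminal, after the container exits:
echo $?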
Related
I have created a Docker image with Debian + Python/Django that runs on port 8000. But after deploying it into Azure AKS, the URL path is not working on port 8000. I'm keeping the important details below.
Step 1:
Dockerfile :
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Step 2:
After building docker image, pushing it to azure registry.
Step 3:
myfile.yaml : this is to deploy azure registry file into aks cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myops
  template:
    metadata:
      labels:
        app: myops
    spec:
      containers:
        - name: myops
          image: quantumregistry.azurecr.io/myops:v1.0
          ports:
            - containerPort: 8000
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: myops-python
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8888
  selector:
    app: myops
# [END service]
Deploy into aks : kubectl apply -f myops.yaml
Step 4: check the service
kubectl get service myops-python --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myops-python LoadBalancer <cluster-ip> <external-ip> 8000:30778/TCP 37m
Note: I have masked the IPs so as not to expose them publicly.
Step 5: I see the container is running alright
kubectl get pods
NAME READY STATUS RESTARTS AGE
myops-5bbd459745-cz2vc 1/1 Running 0 19m
Step 6: I see the container log and it shows that Python is running on host 0.0.0.0, port 8000.
kubectl logs -f myops-5bbd459745-cz2vc
Watching for file changes with StatReloader
Performing system checks...
WARNING:param.main: pandas could not register all extension types imports failed with the following error: cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic' (/usr/local/lib/python3.9/site-packages/pandas/core/dtypes/generic.py)
System check identified no issues (0 silenced).
September 19, 2021 - 06:47:57
Django version 3.2.5, using settings 'myops_project.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
The issue is that when I open this in the browser, http://:8000/myops_app, it does not work and times out.
The Service myops-python is set up to receive requests on port 8000 but then it will send the request to the pod on target port 8888.
ports:
  - port: 8000
    targetPort: 8888
The container myops in the Pod myops, however, is not listening on port 8888. Rather it is listening on port 8000.
Dockerfile:
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Please set spec.ports[0].targetPort to 8000 manually or remove targetPort from spec.ports[0] in the Service myops-python. By default and for convenience, the targetPort is set to the same value as the port field. For more information please see Defining a Service.
Tip: You can use kubectl edit service <service-name> -n <namespace> to edit your Service manifest.
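The corrected Service would then look something like this (a sketch based on the manifest above; only targetPort changes):
apiVersion: v1
kind: Service
metadata:
  name: myops-python
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    app: myops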
I'm trying a simple microservices app on a cloud Kubernetes cluster. This is the Ingress yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: auth-svc
      port:
        number: 5000
  rules:
    - host: "somehostname.xyz"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: auth-svc
                port:
                  number: 5000
The problem:
When I use this URL, I'm able to access the auth service: http://somehostname.xyz:31840. However, if I use http://somehostname.xyz, I get a "This site can’t be reached somehostname.xyz refused to connect." error.
The auth service sends GET requests to other services too, and I'm able to see the response from those services if I use:
http://somehostname.xyz:31840/go or http://somehostname.xyz:31840/express. But again, these work only if the nodeport 31840 is used.
My questions:
1. What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
2. Is there a method to test this in a different way to figure out where the problem is?
3. Is it a problem with the Ingress or Auth namespace? Is it a problem with the hostname in Flask? Is it a problem with the Ingress controller? How do I debug this?
These are the results of kubectl get all and other commands.
NAME READY STATUS RESTARTS
pod/auth-flask-58ccd5c94c-g257t 1/1 Running 0
pod/ingress-nginx-nginx-ingress-6677d54459-gtr42 1/1 Running 0
NAME TYPE EXTERNAL-IP PORT(S)
service/auth-svc ClusterIP <none> 5000/TCP
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
NAME READY UP-TO-DATE AVAILABLE
deployment.apps/auth-flask 1/1 1 1
deployment.apps/ingress-nginx-nginx-ingress 1/1 1 1
NAME DESIRED CURRENT READY
replicaset.apps/auth-flask-58ccd5c94c 1 1 1
replicaset.apps/ingress-nginx-nginx-ingress-6677d54459 1 1 1
NAME CLASS HOSTS ADDRESS PORTS
ingress-nginx-nginx-ingress <none> somehostname.xyz 172.xxx.xx.130 80
Describing ingress also seems normal.
kubectl describe ingress ingress-nginx-nginx-ingress
Name: ingress-nginx-nginx-ingress
Namespace: default
Address: 172.xxx.xx.130
Default backend: auth-svc:5000 (10.x.xx.xxx:5000)
Rules:
Host Path Backends
---- ---- --------
somehostname.xyz
/ auth-svc:5000 (10.x.xx.xxx:5000)
Annotations: kubernetes.io/ingress.class: nginx
This is the code of Auth.
import requests
from flask import Flask

app = Flask(__name__)

@app.route('/')
def indexPage():
    return ' <!DOCTYPE html><html><head><meta charset="UTF-8" />\
    <title>Microservice</title></head> \
    <body><div style="text-align: center;">Welcome to the Auth page</div></body></html>'

@app.route('/go')
def getGoJson():
    return requests.get('http://analytics-svc:8082/info').content

@app.route('/express')
def getNodeResponse():
    return requests.get('http://node-svc:8085/express').content

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0")
and Auth's Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
The part of docker-compose yaml for auth:
version: "3.3"
services:
auth:
build: ./auth/
image: nav9/auth-flask:v1
ports:
- "5000:5000"
Auth's Kubernetes manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-flask
spec:
  selector:
    matchLabels:
      any-name: auth-flask
  template:
    metadata:
      labels:
        any-name: auth-flask
    spec:
      containers:
        - name: auth-name
          image: nav9/auth-flask:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  # type: ClusterIP
  ports:
    - targetPort: 5000
      port: 5000
  selector:
    any-name: auth-flask
What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
If the URL works when using the nodeport and not without the nodeport, then this means that the ingress is not configured properly for what you want to do.
Is there a method to test this in a different way to figure out where the problem is?
Steps for troubleshooting are:
The first step is determine if the error is from the ingress or from your back-end service.
In your case, the error "This site can’t be reached: somehostname.xyz refused to connect" sounds like the Ingress found the service to map to and used port 5000 to connect to it, and the connection was refused or nothing was listening on port 5000 for that service.
I'd next look at the auth-svc logs to see that that request came into the system and why it was refused.
My guess is that the auth service is listening on port 31840 but your ingress says to connect to port 5000 based on the configuration.
You might try adding a port mapping from 80 to 31840 as a hack/test to see if you get a different error.
Something like:
spec:
  rules:
    - host: "somehostname.xyz"
      http:
        paths:
          - path: "/"
            backend:
              service:
                port:
                  number: 31840
I've only included the part needed to show the indentation properly.
So the other way to test this out is to create additional URLs that map to different ports, so for example:
/try1 => auth-svc:5000
/try2 => auth-svc:31840
/try3 => auth-svc:443
The other part that I haven't played with, but which might be an issue, is that you are using http. I don't know of any auth service that would use http, and trying to connect over http to an app that expects https will get the connection either refused or a strange error, so that might be related to the problem/error you are seeing.
Hope this gives you some ideas to try.
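One more concrete check (a sketch; names taken from the kubectl output above) is to confirm the Service has endpoints and to test the backend from inside the cluster, bypassing the ingress entirely:
kubectl get svc auth-svc -o wide
kubectl get endpoints auth-svc
# run a throwaway pod and hit the service directly
kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- curl -sv http://auth-svc:5000/
If that curl works, the Flask app and Service are fine and the problem is in the ingress/LoadBalancer path (for example, DNS for somehostname.xyz not pointing at the ingress controller's external IP).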
The solution has three parts:
Use kubectl get all to find out the running ingress service:
NAME TYPE EXTERNAL-IP PORT(S)
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
Copy the EXTERNAL-IP of the service (in this case 172.xxx.xx.130).
Add a DNS A record named *.somehostname.xyz for the cloud cluster, and use the IP address 172.xxx.xx.130.
When accessing the hostname via the browser, make sure that http is used instead of https.
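A quick way to verify each part (sketch; hostname and IP as masked above, assuming the DNS record has propagated):
nslookup somehostname.xyz          # should return the LoadBalancer EXTERNAL-IP (172.xxx.xx.130)
curl -v http://somehostname.xyz/   # should now hit the ingress on port 80 and be routed to auth-svc:5000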
I've gone through a fair few Stack Overflow posts, none of which are working... So here's my issue:
I've got a simple Node app listening on 0.0.0.0 at port 5000, with a single endpoint at /.
I've got two k8s objects, here's my Deployment object:
### pf deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: pf-deployment
spec:
  # 3 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: public-facing
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: public-facing
    spec:
      containers:
        - name: public-facing
          # Run this image
          image: pf:ale8k
          ports:
            - containerPort: 5000
Next, here is my Service object:
### pf service
apiVersion: v1
kind: Service
metadata:
  name: pf-service
  labels:
    run: pf-service-label
spec:
  type: NodePort ### may be omitted as it is a default type
  selector:
    name: public-facing ### should match your labels defined for your angular pods
  ports:
    - protocol: TCP
      targetPort: 5000 ### port your app listens on
      port: 5000 ### port on which you want to expose it within your cluster
Finally, a very simple dockerfile:
### generic docker file
FROM node:12
WORKDIR /usr/src/app
COPY . .
RUN npm i
EXPOSE 5000
CMD ["npm", "run", "start"]
I have my image in the minikubes local docker registry, so that's not the issue...
When I try:
curl $(minikube service pf-service --url)
I get:
curl: (7) Failed to connect to 192.168.99.101 port 31753: Connection refused
When I try:
minikube service pf-service
I get a little further output:
Most likely you need to configure your SUID sandbox correctly
I have the hello-minikube image running and it works perfectly fine, so I presume it isn't my NACL?
I'm very new to kubernetes, so apologies in advance if it's very simple.
Thanks!
The Service has the selector name: public-facing, but the pod has the label app: public-facing. They need to be the same for the Endpoints of the Service to be populated with pod IPs.
If you execute the command below
kubectl describe svc pf-service
you will see that Endpoints has no IPs, which is the cause of the connection refused error.
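Illustrative output with the mismatched selector (a sketch of what you would expect to see, not the poster's actual output):
Name:              pf-service
Type:              NodePort
Selector:          name=public-facing
Endpoints:         <none>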
Change the selector in the Service as below to make it work.
### pf service
apiVersion: v1
kind: Service
metadata:
  name: pf-service
  labels:
    run: pf-service-label
spec:
  type: NodePort ### may be omitted as it is a default type
  selector:
    app: public-facing ### should match your labels defined for your angular pods
  ports:
    - protocol: TCP
      targetPort: 5000 ### port your app listens on
      port: 5000 ### port on which you want to expose it within your cluster
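After applying the fix, the Endpoints should be populated and the NodePort should answer (a sketch of the verification steps; pf-service.yaml is a hypothetical filename for the manifest above):
kubectl apply -f pf-service.yaml
kubectl describe svc pf-service    # Endpoints should now list the pod IP(s)
curl $(minikube service pf-service --url)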
I am trying a very simple tutorial explaining how to convert docker-compose to minishift (Minishift and Kompose). I tried to convert and push this docker-compose.yml example:
version: "2"
services:
redis-master:
image: k8s.gcr.io/redis:e2e
ports:
- "6379"
redis-slave:
image: gcr.io/google_samples/gb-redisslave:v1
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
frontend:
image: gcr.io/google-samples/gb-frontend:v4
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer
I successfully composed and pushed, as I can see from these logs:
C:\Users\Cast\docker-compose-to-minishift>kompose-windows-amd64 up --provider=openshift
INFO We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
INFO Deploying application in "myproject" namespace
INFO Successfully created Service: frontend
INFO Successfully created Service: redis-master
INFO Successfully created Service: redis-slave
INFO Successfully created DeploymentConfig: frontend
INFO Successfully created ImageStream: frontend
INFO Successfully created DeploymentConfig: redis-master
INFO Successfully created ImageStream: redis-master
INFO Successfully created DeploymentConfig: redis-slave
INFO Successfully created ImageStream: redis-slave
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.
C:\Users\Cast\docker-compose-to-minishift>oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
frontend 0 1 0 config,image(frontend:v4)
redis-master 1 1 1 config,image(redis-master:e2e)
redis-slave 1 1 1 config,image(redis-slave:v1)
Nevertheless, I couldn't reach the web application, and looking at the logs I found "The container frontend is crashing frequently. It must wait before it will be restarted again"; clicking on details showed:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.13. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Searching around, I found a suggestion to change from port 80 to some non-root-privileged port (e.g. 8080). So I changed it in my docker-compose, manually deleted the namespace myproject, recreated it in the OpenShift Web Console and tried to run it once again. Exactly the same exception with the same message.
In case it is relevant, I have another cmd window with
C:\Users\Cast\docker-compose-to-minishift>kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
I am quite a beginner at moving from docker-compose to minishift (first time using the Kompose tool, to be honest).
My main question: why do I still get the same issue if I have already changed the ports from 80:80 to 8080:8080 in docker-compose?
frontend:
image: gcr.io/google-samples/gb-frontend:v4
ports:
- "8080:8080"
Secondary question: what do I have to check to see why I can't start the front-end service? The exception provided is quite limited.
*** edited
converted docker-compose by kompose (only front-end files)
frontend-imagestream
apiVersion: v1
kind: ImageStream
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: frontend
  name: frontend
spec:
  tags:
    - annotations: null
      from:
        kind: DockerImage
        name: gcr.io/google-samples/gb-frontend:v4
      generation: null
      importPolicy: {}
      name: v4
status:
  dockerImageRepository: ""
frontend-service
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\tools\kompose-windows-amd64.exe convert --provider=openshift
    kompose.service.type: LoadBalancer
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: frontend
  name: frontend
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: frontend
  type: LoadBalancer
status:
  loadBalancer: {}
frontend-deploymentconfig
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    kompose.cmd: C:\tools\kompose-windows-amd64.exe convert --provider=openshift
    kompose.service.type: LoadBalancer
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: frontend
  name: frontend
spec:
  replicas: 1
  selector:
    io.kompose.service: frontend
  strategy:
    resources: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: frontend
    spec:
      containers:
        - env:
            - name: GET_HOSTS_FROM
              value: dns
          image: ' '
          name: frontend
          ports:
            - containerPort: 8080
          resources: {}
      restartPolicy: Always
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - frontend
        from:
          kind: ImageStreamTag
          name: frontend:v4
      type: ImageChange
status: {}
I've added all the logs (I removed Redis and left only the FrontEnd service, since it was the only one causing the issue):
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Try the new cross-platform PowerShell https://aka.ms/pscore6
PS C:\Windows\system32> cd C:\to_learn\docker-compose-to-minishift\first-try
PS C:\to_learn\docker-compose-to-minishift\first-try> kompose-windows-amd64 up --provider=openshift
INFO We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
INFO Deploying application in "myproject" namespace
INFO Successfully created Service: frontend
INFO Successfully created DeploymentConfig: frontend
INFO Successfully created ImageStream: frontend
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc expose service/frontend
route.route.openshift.io/frontend exposed
PS C:\to_learn\docker-compose-to-minishift\first-try> minishift openshift service frontend --namespace=myproject
|-----------|----------|----------------------|-------------------------------------------------|--------|
| NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT |
|-----------|----------|----------------------|-------------------------------------------------|--------|
| myproject | frontend | 192.168.99.101:30215 | http://frontend-myproject.192.168.99.101.nip.io | |
|-----------|----------|----------------------|-------------------------------------------------|--------|
PS C:\to_learn\docker-compose-to-minishift\first-try>
And when I try to open http://frontend-myproject.192.168.99.101.nip.io in Chrome:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
...
Edited (trying to deploy another sample application)
PS C:\to_learn\docker-compose-to-minishift\first-try> nslookup x.127.0.0.1.xip.io
Server: one.one.one.one
Address: 1.1.1.1
Non-authoritative answer:
Name: x.127.0.0.1.xip.io
Address: 127.0.0.1
PS C:\to_learn\docker-compose-to-minishift\first-try> oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth SSPI Kerberos SPNEGO
Server https://192.168.99.101:8443
kubernetes v1.11.0+d4cacc0
PS C:\to_learn\docker-compose-to-minishift\first-try> oc new-app --name='cotd' --labels name='cotd' php~https://github.com/devopswith-openshift/cotd.git -e SELECTOR=cats
--> Found image dc5aa55 (2 months old) in image stream "openshift/php" under tag "7.1" for "php"
Apache 2.4 with PHP 7.1
-----------------------
PHP 7.1 available as container is a base platform for building and running various PHP 7.1 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.
Tags: builder, php, php71, rh-php71
* A source build using source code from https://github.com/devopswith-openshift/cotd.git will be created
* The resulting image will be pushed to image stream tag "cotd:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "cotd"
* Ports 8080/tcp, 8443/tcp will be load balanced by service "cotd"
* Other containers can access this service through the hostname "cotd"
--> Creating resources with label name=cotd ...
imagestream.image.openshift.io "cotd" created
buildconfig.build.openshift.io "cotd" created
deploymentconfig.apps.openshift.io "cotd" created
service "cotd" created
--> Success
Build scheduled, use 'oc logs -f bc/cotd' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/cotd'
Run 'oc status' to view your app.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc expose svc/cotd
route.route.openshift.io/cotd exposed
PS C:\to_learn\docker-compose-to-minishift\first-try> oc status
In project myproject on server https://192.168.99.101:8443
http://cotd-myproject.192.168.99.101.nip.io to pod port 8080-tcp (svc/cotd)
dc/cotd deploys istag/cotd:latest <-
bc/cotd source builds https://github.com/devopswith-openshift/cotd.git on openshift/php:7.1
build #1 pending for 11 minutes
deployment #1 waiting on image or update
http://frontend-myproject.192.168.99.101.nip.io to pod port 8080 (svc/frontend)
dc/frontend deploys istag/frontend:v4
deployment #1 waiting on image or update
4 infos identified, use 'oc status --suggest' to see details.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc status --suggest
In project myproject on server https://192.168.99.101:8443
http://cotd-myproject.192.168.99.101.nip.io to pod port 8080-tcp (svc/cotd)
dc/cotd deploys istag/cotd:latest <-
bc/cotd source builds https://github.com/devopswith-openshift/cotd.git on openshift/php:7.1
build #1 pending for 12 minutes
deployment #1 waiting on image or update
http://frontend-myproject.192.168.99.101.nip.io to pod port 8080 (svc/frontend)
dc/frontend deploys istag/frontend:v4
deployment #1 waiting on image or update
Info:
* dc/cotd has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/cotd --readiness ...
* dc/cotd has no liveness probe to verify pods are still running.
try: oc set probe dc/cotd --liveness ...
* dc/frontend has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/frontend --readiness ...
* dc/frontend has no liveness probe to verify pods are still running.
try: oc set probe dc/frontend --liveness ...
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
PS C:\to_learn\docker-compose-to-minishift\first-try>
Build Pending Status
This happens when you use a non-root container, like the official Bitnami images. We used user: root and network_mode: host when the container needs to bind to the host network.
apache:
  image: bitnami/apache:2.4
  container_name: "apache"
  ports:
    - 80:80
  network_mode: host
  privileged: true
  user: root
  environment:
    DOCKER_HOST: "unix:///var/run/docker.sock"
  env_file:
    - .env
  volumes:
    - ./setup/apache/httpd.conf:/opt/bitnami/apache/conf/httpd.conf
Trying to do something that should be pretty simple: starting up an Express pod and fetching localhost:5000/, which should respond with Hello World!.
I've installed ingress-nginx for Docker for Mac and minikube
Mandatory: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Docker for Mac: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
minikube: minikube addons enable ingress
I run skaffold dev --tail
It prints out Example app listening on port 5000, so apparently is running
Navigate to localhost and localhost:5000 and get a "Could not get any response" error
Also, I tried minikube ip, which is 192.168.99.100, and experienced the same results
Not quite sure what I am doing wrong here. Code and configs are below. Suggestions?
index.js
// Import dependencies
const express = require('express');
// Set the ExpressJS application
const app = express();
// Set the listening port
// Web front-end is running on port 3000
const port = 5000;
// Set root route
app.get('/', (req, res) => res.send('Hello World!'));
// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));
skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: sockpuppet/server
      context: server
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: '**/*.js'
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress-service.yaml
      - k8s/server-deployment.yaml
      - k8s/server-cluster-ip-service.yaml
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: sockpuppet/server
          ports:
            - containerPort: 5000
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
Dockerfile.dev
FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Output from describe
$ kubectl describe ingress ingress-service
Name: ingress-service
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16h nginx-ingress-controller Ingress default/ingress-service
Normal CREATE 21s nginx-ingress-controller Ingress default/ingress-service
Output from kubectl get po -l component=server
$ kubectl get po -l component=server
NAME READY STATUS RESTARTS AGE
server-deployment-cf6dd5744-2rnh9 1/1 Running 0 11s
server-deployment-cf6dd5744-j9qvn 1/1 Running 0 11s
server-deployment-cf6dd5744-nz4nj 1/1 Running 0 11s
Output from kubectl describe pods server-deployment (I noticed that Host Port is 0/TCP; possibly the issue?):
Name: server-deployment-6b78885779-zttns
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 08 Oct 2019 19:54:03 -0700
Labels: app.kubernetes.io/managed-by=skaffold-v0.39.0
component=server
pod-template-hash=6b78885779
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.39
skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/server-deployment-6b78885779
Containers:
server:
Container ID: docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
Image: sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Image ID: docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 08 Oct 2019 19:54:05 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-qz5kr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qz5kr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/server-deployment-6b78885779-zttns to minikube
Normal Pulled 7s kubelet, minikube Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
Normal Created 7s kubelet, minikube Created container server
Normal Started 6s kubelet, minikube Started container server
OK, got this sorted out now.
It boils down to the kind of Service being used: ClusterIP.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I am wanting to connect to a Pod or Deployment directly from outside of the cluster (something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using NodePort:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in my case, if I want to continue using a Service, I'd change my Service manifest to:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515
Make sure to manually set nodePort: <port>, otherwise it is kind of random and a pain to use.
Then I'd get the minikube IP with minikube ip and connect to the Pod with 192.168.99.100:31515.
At that point, everything worked as expected.
But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably totally fine. But I want my manifests to stay as close to the production version (i.e. ClusterIP).
There are a couple ways to get around this:
Using something like Kustomize where you can set a base.yaml and then have overlays for each environment where it just changes the relevant info avoiding manifests that are mostly duplicative.
Using kubectl port-forward. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:
kubectl port-forward services/postgres-cluster-ip-service 5432:5432
Or for the back-end and Postman:
kubectl port-forward services/server-cluster-ip-service 5000:5000
I'm playing with doing this through the ingress-service.yaml using nginx-ingress, but don't have that working quite yet. Will update when I do. But for me, port-forward seems the way to go since I can just have one set of production manifests that I don't have to alter.
Skaffold Port-Forwarding
This is even better for my needs. Appending this to the bottom of the skaffold.yaml is basically the same thing as kubectl port-forward, without tying up a terminal or two:
portForward:
  - resourceType: service
    resourceName: server-cluster-ip-service
    port: 5000
    localPort: 5000
  - resourceType: service
    resourceName: postgres-cluster-ip-service
    port: 5432
    localPort: 5432
Then run skaffold dev --port-forward.