I'm following the steps below to launch a multi-container app (db and web app). The following is based on this.
--- THE STEPS BELOW ARE COPIED FROM THE ANSWER TO: docker mysql in kubernetes ERROR 2005 (HY000): Unknown MySQL server host '' (-3) ---
First, use your favorite editor to start an eramba-cm.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: eramba
  namespace: eramba-1
data:
  c2.8.1.sql: |
    CREATE DATABASE IF NOT EXISTS erambadb;
    USE erambadb;
    ## IMPORTANT: MUST BE INDENTED 2 SPACES AFTER c2.8.1.sql ##
    <copy & paste content from here: https://raw.githubusercontent.com/markz0r/eramba-community-docker/master/sql/c2.8.1.sql>
kubectl create -f eramba-cm.yaml
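To sanity-check that the SQL content actually landed under the c2.8.1.sql key, one quick option (the dots in the key are escaped for jsonpath) is:

kubectl get configmap eramba -n eramba-1 -o jsonpath='{.data.c2\.8\.1\.sql}' | head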
Create the storage for MariaDB:
cat << EOF > eramba-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eramba-storage
spec:
  storageClassName: eramba-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/osboxes/eramba/erambadb
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eramba-storage
  namespace: eramba-1
spec:
  storageClassName: eramba-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
...
EOF
kubectl create -f eramba-storage.yaml
Install bitnami/mariadb using Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade -i eramba bitnami/mariadb --set auth.rootPassword=eramba,auth.database=erambadb,initdbScriptsConfigMap=eramba,volumePermissions.enabled=true,primary.persistence.existingClaim=eramba-storage --namespace eramba-1 --set mariadb.volumePermissions.enabled=true
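Before wiring up the web app, it can help to confirm the chart came up and the init script created the schema. A minimal check, assuming the root password set in the helm command above:

kubectl get pods -n eramba-1
kubectl exec -n eramba-1 eramba-mariadb-0 -- mysql -uroot -peramba -e 'SHOW DATABASES;'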
Run the eramba web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eramba-web
  template:
    metadata:
      labels:
        app: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_HOSTNAME
          value: eramba-mariadb
        - name: MYSQL_DATABASE
          value: erambadb
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: eramba
        - name: DATABASE_PREFIX
          value: ""
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  ports:
  - name: http
    nodePort: 30045
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: eramba-web
  type: NodePort
...
Now browse eramba-web via port-forward or http://<node ip>:30045.
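For example, the port-forward option could look like this (the local port 8080 is an arbitrary choice):

kubectl port-forward -n eramba-1 svc/eramba-web 8080:8080
# then open http://localhost:8080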
The kubectl get cm,pvc,pv,svc,pods output is:
root@osboxes:~# kubectl get cm,pvc,pv,svc,pods -o wide -n eramba-1
NAME DATA AGE
configmap/eramba 1 134m
configmap/eramba-mariadb 1 131m
configmap/kube-root-ca.crt 1 29h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/eramba-storage Bound eramba-storage 5Gi RWO eramba-storage 133m Filesystem
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/eramba-storage 5Gi RWO Retain Bound eramba-1/eramba-storage eramba-storage 133m Filesystem
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/eramba-mariadb ClusterIP 10.104.161.85 <none> 3306/TCP 131m app.kubernetes.io/component=primary,app.kubernetes.io/instance=eramba,app.kubernetes.io/name=mariadb
service/eramba-web NodePort 10.100.185.75 <none> 8080:30045/TCP 129m app.kubernetes.io/name=eramba-web
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/eramba-mariadb-0 1/1 Running 0 131m 10.20.0.6 osboxes <none> <none>
pod/eramba-web-6cc9c687d8-k6r9j 1/1 Running 0 129m 10.20.0.7 osboxes <none> <none>
When I tried to access 10.100.185.75:30045, the browser said the site was not reachable.
root@osboxes:/home/osboxes/eramba# kubectl describe service/eramba-web -n eramba-1
Name: eramba-web
Namespace: eramba-1
Labels: app.kubernetes.io/name=eramba-web
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.185.75
IPs: 10.100.185.75
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 30045/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
The logs for the web-app pod:
root@osboxes:~# kubectl logs eramba-web-6cc9c687d8-k6r9j -n eramba-1
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.20.0.7. Set the 'ServerName' directive globally to suppress this message
root@osboxes:~#
I've noticed the lack of endpoints for the eramba-web service. When I changed the service selector to app: eramba-web, the endpoints got an IP, but the browser still can't reach the app.
This is a community wiki answer posted for better visibility. Feel free to expand it.
The requester uses the NodePort type for the eramba-web service. To access the application, it is necessary to use the IP addresses of the nodes in the cluster, instead of the internal IP address 10.100.x.y.
From Kubernetes documentation:
NodePort: Exposes the Service on each Node's IP at a static port (the
NodePort). A ClusterIP Service, to which the NodePort Service routes,
is automatically created. You'll be able to contact the NodePort
Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
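In other words, take a node's IP (not the ClusterIP of the service) and combine it with the node port, for example:

kubectl get nodes -o wide      # note the INTERNAL-IP / EXTERNAL-IP columns
curl http://<node-ip>:30045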
Related
I'm new to Kubernetes and I'm having some problems.
I have an Ubuntu server that I'm working on. I created pods and services; I also have an API gateway pod and service. I want to reach this pod from my PC using my Ubuntu server's IP address.
But I cannot reach this pod from outside of the server.
The app in the Docker image is running on port 80.
My api-gateway.yaml file is like that:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: myapi/api-gateway
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  selector:
    app: api-gateway
  ports:
  - name: api-gateway
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
  type: NodePort
  externalIPs:
  - <My Ubuntu Server IP Adress>
and when I type kubectl get services api-gateway, I get
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-gateway NodePort 10.104.42.32 <MyUbuntuS IP> 80:30007/TCP 131m
also when I type kubectl describe services api-gateway, I get
Name: api-gateway
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=api-gateway
Type: NodePort
IP Families: <none>
IP: 10.104.42.32
IPs: 10.104.42.32
External IPs: <My Ubuntu Server IP Adress>
Port: api-gateway 80/TCP
TargetPort: 80/TCP
NodePort: api-gateway 30007/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 30m service-controller ClusterIP -> LoadBalancer
Normal Type 6m10s service-controller NodePort -> LoadBalancer
Normal Type 77s (x2 over 9m59s) service-controller LoadBalancer -> NodePort
So, how can I reach this pod on my PC's browser or Postman?
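(For reference, with a NodePort service like the one above, a request from outside the server would normally target the server's IP plus the nodePort, assuming that port is open in the firewall, for example:

curl http://<My Ubuntu Server IP Address>:30007
)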
Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on Ubuntu 14.04, backed by Docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a ZooKeeper service to the internet. Seeing as my cluster is not running on a cloud provider, I set up MetalLB in order to provide a network load-balancer implementation for my ZooKeeper service.
On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-5c9894b5cd-9gh8m 1/1 Running 0 5h59m
speaker-j2z8q 1/1 Running 0 5h59m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.xxx.xxx.xxx <none> 443/TCP 6d19h
zk-cs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2181:30035/TCP 56m
zk-hs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2888:30664/TCP,3888:31113/TCP 6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
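(As an aside, ZooKeeper doesn't speak HTTP, so an empty reply from curl is expected; a more telling check, assuming netcat is available on the host, is a four-letter-word command such as srvr, which is whitelisted by default in ZooKeeper 3.5+:

echo srvr | nc 172.1.1.x 2181
)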
So far it all looks good: I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me. I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running just sees the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
        - name: zoo-config
          mountPath: /conf
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test stuff in your local environment. So, by default, it does not have the correct iptables permissions, and yes, you can adjust that, but if your goal is to run without any cloud provider, I highly recommend using kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration and you will be able to sort out your networking problems without headaches.
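A minimal sketch of bootstrapping with kubeadm (the pod CIDR shown is just the common Flannel default; adjust it to whichever CNI you install):

# on the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# install a CNI plugin, then join workers using the "kubeadm join ..." command printed by init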
I set up a simple redis ClusterIP service to be accessed by a PHP LoadBalancer service inside the cluster. The PHP log shows a connection timeout error. The redis service is not accessible.
'production'.ERROR: Operation timed out {"exception":"[object] (RedisException(code: 0):
Operation timed out at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis
/html/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php(109):
Redis->connect('redis-svc', '6379', 0, '', 0, 0)
My redis service is quite simple so I don't know what went wrong:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources: {}
        ports:
        - containerPort: 6379
      restartPolicy: Always
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis-svc
spec:
  selector:
    app: redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
  type: ClusterIP
I verified that redis-svc is running, so why can't it be accessed by another service?
kubectl get service redis-svc git:k8s*
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-svc ClusterIP 10.101.164.225 <none> 6379/TCP 22m
This SO question, "kubernetes cannot ping another service", said ping doesn't work with a service's cluster IP (indeed). So how do I verify whether redis-svc can be accessed or not?
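One way to test reachability by name from inside the cluster is a throwaway client pod, for example (the pod name redis-test is arbitrary):

kubectl run redis-test --rm -it --image=redis:alpine --restart=Never -- redis-cli -h redis-svc -p 6379 ping
# expected output: PONG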
---- update ----
My first question was a silly mistake, but I still don't know how to verify whether the service can be accessed (by its name). For example, I changed the service name to be the same as the deployment name and found that PHP failed to access redis again.
kubectl get endpoints did not help this time.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
...
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis
...
My PHP app is another service, with an env var set to the redis service's name:
spec:
  containers:
  - env:
    - name: REDIS_HOST # the php code reads this variable
      value: redis-svc # changed to "redis" when the redis service name changed to "redis"
----- update 2------
The reason I can't set my redis service name to "redis" is because "kubelet adds a set of environment variables for each active Service", so with the name "redis" there will be a REDIS_PORT=tcp://10.101.210.23:6379 which overwrites my own REDIS_PORT=6379.
But my PHP app just expects the value of REDIS_PORT to be 6379.
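(If keeping the service name "redis" matters more than those injected variables, one option on Kubernetes 1.13+ is to turn the Docker-link-style service variables off in the pod spec. A sketch, where the container name and image are just placeholders:

spec:
  enableServiceLinks: false   # stops kubelet from injecting REDIS_PORT=tcp://... style variables
  containers:
  - name: php                 # hypothetical container name
    image: php:7-fpm-alpine   # hypothetical image
    env:
    - name: REDIS_HOST
      value: redis
    - name: REDIS_PORT
      value: "6379"
)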
I ran the yaml configuration you provided and it created the deployment and service. However, when I run the commands below:
>>> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d14h
redis-svc ClusterIP 10.105.31.201 <none> 6379/TCP 109s
>>>> kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.99.116:8443 5d14h
redis-svc <none> 78s
As you can see, the endpoints for redis-svc are none, which means the service doesn't have an endpoint to connect to. You are using the selector label app: redis in redis-svc, but the pods don't have the label that the service selects on. Adding the label app: redis to the pod template will work. The complete working deployment yaml will look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources: {}
        ports:
        - containerPort: 6379
      restartPolicy: Always
status: {}
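After applying this, the service should pick up an endpoint, which can be confirmed with, for example:

kubectl get endpoints redis-svc
# the ENDPOINTS column should now show <pod-ip>:6379 instead of <none>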
I have a problem with accessing my service from outside.
First of all, here is my conf yaml files:
nginx-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: development
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: development
spec:
  type: LoadBalancer
  selector:
    app: my-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 51.15.41.227-51.15.41.227
Then I created the cluster. The command kubectl get all -o wide prints:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/my-nginx-5796dcf6c4-rxl6k 1/1 Running 1 20h 10.244.0.16 scw-7d6c86
pod/my-nginx-5796dcf6c4-zf7vd 1/1 Running 0 20h 10.244.1.4 scw-7a7908
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nginx-service LoadBalancer 10.100.63.177 51.15.41.227 80:30883/TCP 54m app=my-nginx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/my-nginx 2 2 2 2 20h my-nginx nginx:1.7.9 app=my-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/my-nginx-5796dcf6c4 2 2 2 20h my-nginx nginx:1.7.9 app=my-nginx,pod-template-hash=5796dcf6c4
Everything looks fine; kubectl describe service/nginx-service also prints:
Name: nginx-service
Namespace: development
Labels:
Annotations:
Selector: app=my-nginx
Type: LoadBalancer
IP: 10.100.63.177
LoadBalancer Ingress: 51.15.41.227
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30883/TCP
Endpoints: 10.244.0.16:80,10.244.1.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 56m metallb-controller Assigned IP "51.15.41.227"
Running curl 51.15.41.227 on the master server prints "Welcome to nginx" and so on. Next I tried to open it from another network and it doesn't work; however, when I add the node port it works: curl 51.15.41.227:30883. All of this is on bare metal. I expected curl 51.15.41.227 from an external host to reach the service.
What did I do wrong?
It will definitely work with http://51.15.41.227 or 51.15.41.227:80.
You should use the node port 30883 (the randomly assigned port) when accessing from an external network. Otherwise it doesn't know where to route the request.
curl http://51.15.41.227:30883
I've created an example Rails 5 app that uses Google Cloud PostgreSQL.
I'm able to run the app locally with docker-compose up, but I'm not able to connect to it remotely when I deploy it to GCP.
I tried to replicate https://cloud.google.com/ruby/tutorials/bookshelf-on-kubernetes-engine where they use targetPort: http-server
The Rails app is published on GitHub.
Am I doing anything obviously wrong? :-|
Running the app locally works
git clone git@github.com:stabenfeldt/k8s-colors.git
docker-compose up -d
docker-compose run colors rake db:create db:migrate
open http://localhost:3000
Create a GKE cluster
gcloud container clusters create color-cluster --num-nodes=2
Setup PostgreSQL Cloud SQL
I followed the instructions from https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine?authuser=1
and updated my config/database.yml and k8s/colors.yml with these values.
Deployed but stuck on ContainerCreating
kubectl apply -f k8s/colors.yml
kubectl get pods
NAME READY STATUS RESTARTS AGE
colors-d9f744dc-d5l5v 0/2 ContainerCreating 0 5m
colors-d9f744dc-spmws 0/2 ContainerCreating 0 5m
kubectl logs colors-d9f744dc-d5l5v -c colors # => Nothing logged
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
colors 2 2 2 0 7m
But fails to connect to the app
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
colors LoadBalancer 10.55.245.192 35.228.111.217 80:30746/TCP 1h
kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 1h
curl 35.228.111.217 # => No response! :-/
kubectl describe svc colors
Name: colors
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"colors","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":3000}]...
Selector: app=colors
Type: LoadBalancer
IP: 10.55.252.91
LoadBalancer Ingress: 35.228.203.46
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30964/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 1m service-controller ClusterIP -> LoadBalancer
Normal EnsuringLoadBalancer 1m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 30s service-controller Ensured load balancer
k8s/service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: colors
  labels:
    app: colors
spec:
  replicas: 2
  selector:
    matchLabels:
      app: colors
  template:
    metadata:
      labels:
        app: colors
    spec:
      containers:
      - name: colors
        image: docker.io/stabenfeldt/colors:latest
        ports:
        - name: http-server
          containerPort: 3000
        env:
        - name: POSTGRES_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=PROJECT_ID:europe-west1:staging=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: colors
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: colors
kubectl describe deployment
Name: colors
Namespace: default
CreationTimestamp: Fri, 13 Jul 2018 10:37:06 +0200
Labels: app=colors
Annotations: deployment.kubernetes.io/revision=1
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"colors"},"name":"colors","namespace":"default"},"spec":{"repl...
Selector: app=colors
Replicas: 2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=colors
Containers:
colors:
Image: docker.io/stabenfeldt/colors:latest
Port: 3000/TCP
Environment:
POSTGRES_HOST: 127.0.0.1:5432
POSTGRES_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
POSTGRES_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
Mounts: <none>
cloudsql-proxy:
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Port: <none>
Command:
/cloud_sql_proxy
-instances=MY-INSTANCE:europe-west1:staging=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: colors-d9f744dc (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 1m deployment-controller Scaled up replica set colors-d9f744dc to 2
kubectl describe service
Name: colors
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"colors","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":3000}]...
Selector: app=colors
Type: LoadBalancer
IP: 10.55.252.91
LoadBalancer Ingress: 35.228.203.46
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30964/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 4m service-controller ClusterIP -> LoadBalancer
Normal EnsuringLoadBalancer 4m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m service-controller Ensured load balancer
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.55.240.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 35.228.79.249:443
Session Affinity: ClientIP
Events: <none>
I don't see anything wrong outright, but here are a few tips for verifying that your Kubernetes objects look like they should, compared to your yamls:
Use the describe command to get more information about objects and make sure they are set up correctly.
For example, if you do kubectl describe deployment <deployment_name> you should verify the following line is present:
Port: 3000/TCP
And for your Service - kubectl describe service <service_name>:
LoadBalancer Ingress: <PUBLIC_IP>
Port: <unset> 80/TCP
TargetPort: 3000/TCP
Finally, I'm not sure if you want to apply the following in your LoadBalancer:
labels:
  app: colors
Since you are using this label as a selector, it may be doing something funky and trying to load balance to itself instead of your containers with the apps in it.
Also, as a side note on terminology: GCP (Google Cloud Platform) is the overarching name for Google's cloud services, and GKE (Google Kubernetes Engine) is the service providing you with a managed Kubernetes cluster.
Hope this helps.
A working setup can be found in my example Rails app on GitHub.
k8s/colors.yml
# Remember to update MY-INSTANCE
apiVersion: v1
kind: Service
metadata:
  name: colors-frontend
  labels:
    app: colors
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: http-server
  selector:
    app: colors
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: colors-frontend
  labels:
    app: colors
    tier: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: colors
        tier: frontend
    spec:
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=MY-INSTANCE:europe-west1:development=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      - name: colors-app
        image: docker.io/stabenfeldt/colors:1
        imagePullPolicy: Always
        env:
        - name: RAILS_LOG_TO_STDOUT
          value: "true"
        - name: RAILS_ENV
          value: development
        - name: POSTGRES_HOST
          value: 127.0.0.1
        - name: POSTGRES_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        ports:
        - name: http-server
          containerPort: 3000
Your POSTGRES_HOST environment variable needs to be localhost instead of 127.0.0.1:5432. You do not need to add the port in POSTGRES_HOST.
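For instance, a sketch of the relevant container env, assuming the app reads the port separately (the POSTGRES_PORT variable name here is an assumption; use whatever the Rails config actually reads, or leave it to default to 5432):

env:
- name: POSTGRES_HOST
  value: localhost      # the cloudsql-proxy sidecar listens on localhost:5432 in the same pod
- name: POSTGRES_PORT   # assumed variable name
  value: "5432"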