How to connect to Cloud SQL from a containerized application in GCloud?

I'm using GCloud; I have a Kubernetes cluster and a Cloud SQL instance.
I have a simple Node.js app that uses a database. When I deploy it with gcloud app deploy it has access to the database. However, when I build a Docker image and expose it, it cannot reach the database.
I expose the Docker application following: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
Cloud SQL doesn't have Private IP enabled; I'm connecting using the Cloud SQL proxy.
In app.yaml I do specify base_settings:cloud_sql_instances. I use the same value in the socketPath config for the MySQL connection.
The error in the Docker logs is:
(node:1) UnhandledPromiseRejectionWarning: Error: connect ENOENT /cloudsql/x-alcove-224309:europe-west1:learning
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)
Can you please explain how to connect to Cloud SQL from a dockerized Node application?

When you deploy your app on App Engine with gcloud app deploy, the platform runs it in a container along with a side-car container in charge of running the cloud_sql_proxy (you request it by specifying base_settings:cloud_sql_instances in your app.yaml file).
Kubernetes Engine doesn't use an app.yaml file and doesn't supply this side-car container for you, so you'll have to set it up yourself. The public doc shows how to do it by creating secrets for your database credentials and updating your deployment file with the side-car container config. An example from the doc would look like:
...
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2  # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
...
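With the proxy sidecar configured for TCP as above, the application container connects to 127.0.0.1:3306 instead of the /cloudsql/... socket path. Here is a minimal sketch using the mysql package; the user, password and database values are placeholders, not details from the question:

const mysql = require('mysql');

// The Cloud SQL proxy sidecar listens on 127.0.0.1:3306 inside the pod.
const connection = mysql.createConnection({
  host: '127.0.0.1',
  port: 3306,
  user: 'DB_USER',       // placeholder
  password: 'DB_PASS',   // placeholder
  database: 'DB_NAME'    // placeholder
});

connection.connect((err) => {
  if (err) {
    console.error('Connection failed:', err);
    return;
  }
  console.log('Connected through the Cloud SQL proxy sidecar');
});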

Generally, the best method is to connect using a sidecar container inside the same pod as your application. You can find examples on the "Connecting from Google Kubernetes Engine" page here. There is also a codelab here that goes more in-depth and might be helpful.

The documentation mentions that it is possible to connect using an internal IP address.
Has anybody tried it?

Related

How to call Redis inside Kubernetes? Problems removing Old Redis service

Previously I had been experimenting with this command on Docker for Desktop Kubernetes
helm install my-release --set password=password bitnami/redis
I had issued the command helm uninstall my-release.
Now I am trying to make my todolistclient work inside (Docker for Desktop) Kubernetes with redis:
kubectl run redis --image=bitnami/redis:latest --replicas=1 --port=6379 --labels="ver=1,app=todo,env=proto" --env="REDIS_PASSWORD=password" --env="REDIS_REPLICATION_MODE=master" --env="REDIS_MASTER_PASSWORD=password"
kubectl run todolistclient --image=siegfried01/todolistclient:latest --replicas=3 --port=5000 --labels="ver=1,app=todo,env=proto"
When I look at the log for ToDoListClient, I see a stack trace indicating that it is failing to connect to the redis server with this error message:
System.AggregateException: One or more errors occurred. (No connection is available to service this operation: EVAL; SocketFailure on my-release-redis-master.default.svc.cluster.local:6379/Subscription, origin: Error, input-buffer: 0, outstanding: 0, last-read: 0s ago, last-write: 0s ago, unanswered-write: 9760s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago)
What is this my-release-redis-master.default.svc.cluster.local? This has been uninstalled and I'm not running that any more.
My C# code is connecting to Redis with
.AddDistributedRedisCache(options => { options.InstanceName = "OIDCTokens"; options.Configuration = "redis,password=password"; })
Just to be certain that I was indeed using the above code, and specifically "redis", I recompiled my code and pushed it to Docker Hub again, and I am still getting the same error.
So apparently there is something left over from the helm version of redis that is translating "redis" into "my-release-redis-master". How do I remove this so I can connect to my current redis?
Thanks
Siegfried
In the todolistclient application you are using my-release-redis-master.default.svc.cluster.local:6379/Subscription. This is the URL of a Service exposing the Redis pod, which was created automatically by the Helm release.
If that is not desired, you need to change that URL in the todolistclient application to point to your own Redis Service.
You have deployed Redis but have not created any Service to expose it, hence you cannot use a Service URL to connect to it unless you create one.
So you have two options:
Use the Redis pod IP in the todolistclient application. This is not recommended, because the pod IP changes when the pod is restarted.
Create a Service and then use that Service URL in the todolistclient application:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    run: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    run: redis
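Assuming the manifest above is saved as redis-service.yaml (the file name is just an example), you would apply it and point the client at the Service name, which is then resolvable as redis-master:6379 from the same namespace:

kubectl apply -f redis-service.yaml
kubectl get svc redis-master   # confirm the Service exists and note its ClusterIP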
Here is a guide on how to deploy a guestbook application on kubernetes and connect to redis.
One suggestion: don't use the same labels for both todolistclient and redis.
The problem was that I had originally changed my source code to accommodate the name generated by helm: my-release-redis-master and later restored the code to just use the domain name redis.
The confusion came from the fact that even though I intended to compile and deploy (to Kubernetes) a debug version (which was the setting I had in Visual Studio), Visual Studio kept recompiling the debug version but deploying that ancient release version with the bad domain name.
The GUI for the Visual Studio 2019 publish dialog apparently is broken and won't let you deploy in debug mode. (I wish I could find the file where that publish dialog stores its settings so I could correct it with Notepad.) It would have been nice to receive a warning that it was not deploying my latest build.
Arghya Sadhu's response was helpful because it gave me the confidence to say that this was not some weird feature of Kubernetes causing my domain name to be translated into the bogus my-release-redis-master.
Thank you Arghya.
So the solution was simple: recompile in release mode and deploy.
Siegfried

Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I built my container image, but when I try to deploy it from the gcloud command line or the Cloud Console, I get the following error: "Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable."
In your code, you probably aren't listening for incoming HTTP requests, or you're listening for incoming requests on the wrong port.
As documented in the Cloud Run container runtime contract, your container must listen for incoming HTTP requests on the port that is defined by Cloud Run and provided in the $PORT environment variable.
If your container fails to listen on the expected port, the revision health check will fail, the revision will be in an error state and the traffic will not be routed to it.
For example, in Node.js with Express, you should use:
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
In Go:
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
}
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
In Python:
app.run(port=int(os.environ.get("PORT", 8080)), host='0.0.0.0', debug=True)
Another reason may be the one I observed: the Docker image may not contain the code required to run the application.
I had a Node application written in TypeScript. To dockerize the application, all I needed to do was compile the code with tsc and run docker build, but I thought gcloud builds submit would take care of that, picking up the compiled code as suggested by the Dockerfile (in conjunction with the .dockerignore), building my code and submitting the image to the repository.
But all it did was copy my source code and submit it to Cloud Build, where, as per the Dockerfile, it dockerized my source code instead of the compiled code.
So remember to include a build step in the Dockerfile if your application is written in a language that requires compilation.
Keep in mind that adding the build step to the Dockerfile can increase the size of every image you push to the repository; it eats up space there and Google is going to charge you for it. A multi-stage build, sketched below, keeps the final image small.
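This is only a rough sketch of such a multi-stage build for a TypeScript/Node app; the node:18 base images, the "build" npm script and the dist/index.js entry point are assumptions, not details from the answer above:

# Build stage: install dev dependencies and compile TypeScript
# (assumes package.json has a "build" script that runs tsc)
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only production dependencies and the compiled output
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
# Hypothetical entry point for the compiled app
CMD ["node", "dist/index.js"]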
Another possibility is that the Docker image ends with a command that takes time to complete. By the time the deployment starts, the server is not yet running and the health check hits a blank.
What kind of command would that be? Usually any command that runs the server in dev mode. For Scala/SBT it would be sbt run; in Node it would be something like npm run dev. In short, make sure to run only the packaged build.
I was exposing a PORT in the Dockerfile; removing that fixed my problem. Google injects the PORT env variable, so the project will pick that variable up.
We can also specify the port number used by the image from the command line.
If we are using Cloud Run, we can use the following:
gcloud run deploy --image gcr.io/<PROJECT_ID>/<APP_NAME>:<APP_VERSION> --max-instances=3 --port <PORT_NO>
Where
<PROJECT_ID> is the project ID
<APP_NAME> is the app name
<APP_VERSION> is the app version
<PORT_NO> is the port number
Cloud Run generates a default YAML file which has a hard-coded default port in it:
spec:
  containerConcurrency: 80
  timeoutSeconds: 300
  containers:
    - image: us.gcr.io/project-test/express-image:1.0
      ports:
        - name: http1
          containerPort: 8080
      resources:
        limits:
          memory: 256Mi
          cpu: 1000m
So we need to expose the same port 8080, or change the containerPort in the YAML file and redeploy.
A possible solution could be:
build locally
push the image to Google Cloud (Container Registry)
deploy on Cloud Run
With commands:
docker build -t gcr.io/project-name/image-name .
docker push gcr.io/project-name/image-name
gcloud run deploy tag-name --image gcr.io/project-name/image-name

Access Kubernetes pod's log files from inside the pod?

I'm currently migrating a legacy server to Kubernetes, and I found that kubectl or dashboard only shows the latest log file, not the older versions. In order to access the old files, I have to ssh to the node machine and search for it.
In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.
So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?
Of course, in a pinch, I could probably just execute something like run_server.sh | tee /var/log/my-own.log when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.
So there are a couple of ways to do this, depending on the scenario. If you are just interested in the logs of the same pod from before the last restart, you can use the --previous flag:
kubectl logs -f <pod-name-xyz> --previous
But since, in your case, you are interested in log files beyond one rotation, here is how you can do it: add a sidecar container alongside your application container:
    volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
  - name: log-helper
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /tmp/logs/*.log']
    volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
This mounts the host's /var/log directory, which contains all the logs, at /tmp/logs inside the sidecar container, and the tail command ensures that the content of all the files is streamed. Now you can run:
kubectl logs <pod-name-abc> -c log-helper
This solution does away with SSH access, but it still needs access to kubectl and a sidecar container. I still think this is not a great solution, and you should consider one of the options from the cluster-level logging architecture documentation of Kubernetes, such as 1 or 2.

Is there any better way to change the source code of a container than creating a new image?

What is the best way to change the source code of my application running as a Kubernetes pod without creating a new version of the image, so I can avoid the time taken to push and pull the image from the repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
Have the container pull from git on creation?
Set up CI/CD?
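For the first idea, one rough sketch (not from the original answer) is an initContainer that clones the repository into a shared emptyDir volume before the app container starts; the repository URL, runtime image and entry point below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-from-git
spec:
  initContainers:
    - name: fetch-source
      image: alpine/git
      # Hypothetical repository URL; clone into the shared volume
      args: ["clone", "https://github.com/<user>/<repo>.git", "/src"]
      volumeMounts:
        - name: source
          mountPath: /src
  containers:
    - name: app
      image: node:18                        # assumed runtime image
      command: ["node", "/src/index.js"]    # hypothetical entry point
      volumeMounts:
        - name: source
          mountPath: /src
  volumes:
    - name: source
      emptyDir: {}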
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder into the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack, and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
Putting it all together, this is what an nginx deployment would look like on Kubernetes, mounting a local folder containing the web site being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/
              name: sources
              readOnly: true
      volumes:
        - name: sources
          hostPath:
            path: /Users/<username>/<source_folder>
            type: Directory
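If the folder you want is not one of the paths minikube mounts automatically (this depends on your driver and OS), you can mount it explicitly; the source and target paths below are the same placeholders as above:

# Keep this command running in a separate terminal while you work
minikube mount /Users/<username>/<source_folder>:/Users/<username>/<source_folder>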
Finally, we have resolved the issue. We changed our image repository from Docker Hub to AWS ECR in the same region where we are running the Kubernetes cluster. Now it takes much less time to push/pull images.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be treated as the standard process.
A few benefits:
immutable images: no intervention in the running instance, which ensures the image runs the same in any environment
rollback: if you encounter issues in the new version, roll back to the previous version (see the one-liner below)
dependencies: new versions may have new dependencies
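For the rollback point, assuming the app runs as a Deployment, the rollback is typically a one-liner (the deployment name is a placeholder):

# Roll the Deployment back to its previous revision
kubectl rollout undo deployment/<deployment-name>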

Ansible service restart does not work

I use Ansible in order to provision a Docker container with Vagrant.
My Ansible file contains the following section to start nginx:
- name: nginx started
  service:
    name: nginx
    state: restarted
From what I understand, this section should restart nginx in all cases, but when I connect via SSH to the container, nginx is not running (the whole Ansible provision process succeeds, there is no log for nginx in /var/log/nginx/error.log). It starts correctly when I manually type the following command: sudo service nginx start.
However, it works when I replace the section above by:
- name: nginx restarted
  command: service nginx restart
It seems the issue is not limited to nginx, and also happens with other services like syslog-ng.
Any idea why using the Ansible service module does not work? (Docker 17.10.0, Vagrant 2.0.1, Ansible 2.4.0)
Ansible's service module tries to guess the underlying init system.
In the case of the phusion/baseimage Docker image, it finds /sbin/initctl, so the module simply launches /sbin/initctl stop nginx; /sbin/initctl start nginx inside the container, which does nothing, because the init system has been changed in this image (my_init).
So the problem is the inconsistent init system state of the image, which Ansible doesn't detect correctly.
The solutions are:
write a my_init Ansible module (the service module first tries to use the {{ ansible_service_mgr }} module [code])
remove initctl from the image, so Ansible will not detect any init system and will fall back to the service command (maybe raise an issue in phusion/baseimage-docker)
use the command module to explicitly call the service command, as you finally did
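A further option not listed above: the service module accepts a use parameter to force a specific backend instead of the auto-detected one. This is only a sketch, assuming the generic service backend works inside this image (the question shows that sudo service nginx start does):

- name: nginx restarted
  service:
    name: nginx
    state: restarted
    use: service   # force the generic 'service' backend instead of the detected initctl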
Please take a look at this Ansible issue; I think it may be related.
According to them:
...this is not service specific. The restarted/reloaded options are
currently simple options that call the underlying init systems
scripts, which (if successful) return success...
That's probably why, even in the Ansible documentation, you can find an example of what you were trying to achieve:
# Example action to restart service httpd, in all cases
- service:
    name: httpd
    state: restarted
But for nginx it doesn't work, as it may not be supported.
Apart from your solution:
- name: nginx restarted
  command: service nginx restart
You can achieve that by using:
- name: restart nginx
  service: name=nginx state=restarted
