How to call Redis inside Kubernetes? Problems removing Old Redis service

Previously I had been experimenting with this command on Docker for Desktop Kubernetes:
helm install my-release --set password=password bitnami/redis
I had issued the command helm uninstall my-release.
Now I am trying to make my todolistclient work inside (Docker for Desktop) Kubernetes with redis:
kubectl run redis --image=bitnami/redis:latest --replicas=1 --port=6379 --labels="ver=1,app=todo,env=proto" --env="REDIS_PASSWORD=password" --env="REDIS_REPLICATION_MODE=master" --env="REDIS_MASTER_PASSWORD=password"
kubectl run todolistclient --image=siegfried01/todolistclient:latest --replicas=3 --port=5000 --labels="ver=1,app=todo,env=proto"
When I look at the log for ToDoListClient, I see a stack trace indicating that it is failing to connect to the redis server with this error message:
System.AggregateException: One or more errors occurred. (No connection is available to service this operation: EVAL; SocketFailure on my-release-redis-master.default.svc.cluster.local:6379/Subscription, origin: Error, input-buffer: 0, outstanding: 0, last-read: 0s ago, last-write: 0s ago, unanswered-write: 9760s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago)
What is this my-release-redis-master.default.svc.cluster.local? This has been uninstalled and I'm not running that any more.
My C# code is connecting to Redis with
.AddDistributedRedisCache(options => { options.InstanceName = "OIDCTokens"; options.Configuration = "redis,password=password"; })
Just to be certain that I was indeed using the above code and specifically "redis", I recompiled my code and pushed to DockerHub again, and I am still getting the same error.
So apparently there is something left over from the helm version of redis that is translating "redis" into "my-release-redis-master". How do I remove this so I can connect to my current redis?
Thanks
Siegfried

In the todolistclient application you are using my-release-redis-master.default.svc.cluster.local:6379/Subscription. This is the URL of a service exposing the redis pod, and it was created automatically by the helm release.
If that is not desired, you need to change that URL in the todolistclient application to point to your own redis service.
You have deployed redis but have not created any service to expose it, hence you cannot use a service URL to connect to it unless you create one.
So you have two options:
Use the redis pod IP in the todolistclient application. This is not recommended because the pod IP changes when the pod is restarted.
Create a service and then use that service URL in the todolistclient application:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    run: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    run: redis
Here is a guide on how to deploy a guestbook application on kubernetes and connect to redis.
One suggestion: don't use the same labels for both todolistclient and redis.
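Once the labels are distinct, the service can also be created from the command line rather than with a manifest. A hedged sketch, assuming kubectl run created a Deployment named redis (older kubectl versions with --replicas do this; newer versions create a bare pod, in which case kubectl expose pod redis works the same way). The resulting service is named redis, which matches the hostname used in the C# configuration:
kubectl expose deployment redis --port=6379 --target-port=6379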

The problem was that I had originally changed my source code to accommodate the name generated by helm (my-release-redis-master), and later restored the code to just use the domain name redis.
The confusion arose from the fact that even though I was intending to compile and deploy (to Kubernetes) a debug version (which is the setting I had in Visual Studio), Visual Studio kept recompiling the debug version but deploying that ancient release version with the bad domain name.
The GUI for the Visual Studio 2019 publish dialog apparently is broken and won't let you deploy in debug mode. (I wish I could find the file where that publish dialog stores its settings so I could correct it with notepad.) It would have been nice if I had received a warning indicating that it was not deploying my latest build.
Arghya Sadhu's response was helpful because it gave me the confidence to say that this was not some weird feature of Kubernetes that was causing my domain name to be translated to the bogus my-release-redis-master.
Thank you Arghya.
So the solution was simple: recompile in release mode and deploy.
Siegfried

Related

Docker DinD fails to build images after upgrading

We are using Docker 18.09.8-dind. DinD (Docker-in-Docker) means running Docker inside a separate container; this way, we send requests to that container to build our images, instead of executing Docker on the machine that wants the built image.
We needed to upgrade from 18.09.8-dind to 20.10.14-dind. Since we use Kubernetes, we just updated the image version in some YAML files:
 spec:
   containers:
   - name: builder
-    image: docker:18.09.8-dind
+    image: docker:20.10.14-dind
     args: ["--storage-driver", "overlay2", "--mtu", "1460"]
     imagePullPolicy: Always
     resources:
Alas, things stopped working after that. Builds failed, and we could find these error messages coming from the code reaching out to our Docker builder:
{"errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"123.456.789.10","port":2375}
Something went wrong and the entire build was interrupted due to an incorrect configuration file or build step,
check your source code.
What can be going on?
We checked the logs in the Docker pod, and found this message at the end:
API listen on [::]:2376
Well, the error message in the question shows we tried to connect to port 2375, which used to work. Why has the port changed?
Docker enables TLS by default from version 19.03 onwards. When Docker uses TLS, it listens on port 2376.
We had three alternatives here:
change the port back to 2375 (which sounds like a bad idea: we would be using the default plain-text port for TLS communication, a very confusing setup);
connect to the new port; or
disable TLS.
In general, connecting to the new port is probably the best solution. However, for reasons specific to us, we chose to disable TLS, which only requires an environment variable in yet another YAML file:
   - name: builder
     image: docker:20.10.14-dind
     args: ["--storage-driver", "overlay2", "--mtu", "1460"]
+    env:
+    - name: DOCKER_TLS_CERTDIR
+      value: ""
     imagePullPolicy: Always
     resources:
       requests:
In most scenarios, though, it is probably better to have TLS enabled and change the port in the client.
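For completeness, a hedged sketch of what that better setup could look like on the client side; the docker-builder service name is an assumption, and the client container also needs access to the client certificates that dind generates under DOCKER_TLS_CERTDIR, typically shared through a volume:
env:
- name: DOCKER_HOST
  value: "tcp://docker-builder:2376"   # assumed service name of the dind pod
- name: DOCKER_TLS_VERIFY
  value: "1"
- name: DOCKER_CERT_PATH
  value: "/certs/client"               # where the dind image places client certs by default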
(Sharing in the spirit of Can I answer my own questions? because it took us some time to piece the parts together. Maybe by sharing this information together with the error message, things can be easier for other affected people to find.)

How to make my application pods to run after the database pods with helm 3

I'm fairly new to helm and I have a few basic questions. I am deploying a RoR application with Helm 3 and I'm using postgresql as the database. I have added the database as a dependency of the application and I'm using the bitnami postgresql helm chart for this purpose. When I deploy my application chart, both the application and the database pods get deployed; however, the application pods start running before the database pods. The application requires the database to be running in the background before the migrations happen, but as the application starts before the database, the migrations fail and the database pods crash, due to which the application pods also crash.
I want my application pods to wait for the database pods to start running. How can I do this using helm?
You can define an initContainer in your chart, which will check database availability (and wait for it to start). Your application will start only after the initContainer exits successfully.
PostgreSQL has a handy pg_isready utility, used to check whether the database is ready to accept requests, which can be used like this:
initContainers:
- name: check-db-ready
  image: postgres:9.6.5
  command: ['sh', '-c',
    'until pg_isready -h postgres -p 5432;
    do echo waiting for database; sleep 2; done;']
In this example the initContainer will check the service postgres on port 5432 for readiness every 2 seconds, and will terminate successfully when the exit code of the pg_isready utility is 0.
See this blog post: https://medium.com/@xcoulon/initializing-containers-in-order-with-kubernetes-18173b9cc222
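For context, here is a hedged sketch of where that block sits inside the application's Deployment template; the container name, the image, and the database host are placeholders (with the bitnami chart the database service is usually named after your release, e.g. <release>-postgresql):
spec:
  template:
    spec:
      initContainers:
      - name: check-db-ready
        image: postgres:9.6.5
        command: ['sh', '-c',
          'until pg_isready -h postgres -p 5432; do echo waiting for database; sleep 2; done;']
      containers:
      - name: app
        image: my-ror-app:latest   # placeholder for your application image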

Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I built my container image, but when I try to deploy it from the gcloud command line or the Cloud Console, I get the following error: "Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable."
In your code, you probably aren't listening for incoming HTTP requests, or you're listening for incoming requests on the wrong port.
As documented in the Cloud Run container runtime contract, your container must listen for incoming HTTP requests on the port that is defined by Cloud Run and provided in the $PORT environment variable.
If your container fails to listen on the expected port, the revision health check will fail, the revision will be in an error state and the traffic will not be routed to it.
For example, in Node.js with Express, you should use:
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
In Go:
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
}
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
In Python:
app.run(port=int(os.environ.get("PORT", 8080)), host='0.0.0.0', debug=True)
Another reason may be the one I observed: the Docker image may not contain the required code to run the application.
I had a Node application written in TypeScript. In order to dockerize the application, all I needed to do was compile the code with tsc and run docker build, but I thought that gcloud builds submit would take care of that, picking up the compiled code as the Dockerfile suggested (in conjunction with the .dockerignore) and building my source code before submitting it to the repository.
But all it did was copy my source code and submit it to Cloud Build, and there, as per the Dockerfile, it dockerized my source code rather than the compiled code.
So remember to include a build step in the Dockerfile if your source code is in a language that requires compilation.
Keep in mind that adding the build step to the Dockerfile will increase the image size every time you push an image to the repository. It eats up space there, and Google will charge you for it.
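As a hedged sketch of that build step (the base images, the dist output folder, and the entry point are assumptions, not taken from the question), a multi-stage Dockerfile compiles the TypeScript inside the image while keeping the final image small, which also helps with the size concern above:
# Illustrative multi-stage build; adjust names and paths to your project
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx tsc                            # compile TypeScript inside the image

FROM node:18-slim
WORKDIR /app
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev                  # install runtime dependencies only
COPY --from=build /app/dist ./dist     # assumes tsc outputs to ./dist
CMD ["node", "dist/index.js"]          # assumed entry point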
Another possibility is that the docker image ends with a command that takes time to complete. By the time deployment starts, the server is not yet running and the health check will hit a blank.
What kind of command would that be? Usually any command that runs the server in dev mode. For Scala/SBT it would be sbt run, or in Node it would be something like npm run dev. In short, make sure to run only the packaged build.
I was exposing a PORT in the Dockerfile; removing that automatically fixed my problem. Google injects the PORT env variable, so the project will pick up that env variable.
We can also specify the port number used by the image from the command line.
If we are using Cloud Run, we can use the following:
gcloud run deploy --image gcr.io/<PROJECT_ID>/<APP_NAME>:<APP_VERSION> --max-instances=3 --port <PORT_NO>
Where
<PROJECT_ID> is the project ID
<APP_NAME> is the app name
<APP_VERSION> is the app version
<PORT_NO> is the port number
Cloud Run generates a default yaml file which has a hard-coded default port in it:
spec:
  containerConcurrency: 80
  timeoutSeconds: 300
  containers:
  - image: us.gcr.io/project-test/express-image:1.0
    ports:
    - name: http1
      containerPort: 8080
    resources:
      limits:
        memory: 256Mi
        cpu: 1000m
So, we need to expose the same port 8080, or change the containerPort in the yaml file and redeploy.
A possible solution could be:
build locally
push the image to Google Cloud
deploy on Cloud Run
With commands:
docker build -t gcr.io/project-name/image-name .
docker push gcr.io/project-name/image-name
gcloud run deploy tag-name --image gcr.io/project-name/image-name

How to connect to Cloud SQL from a containerized application in GCloud?

I'm using GCloud; I have a Kubernetes cluster and a Cloud SQL instance.
I have a simple node.js app that uses a database. When I deploy it with gcloud app deploy it has access to the database. However, when I build a docker image and expose it, it cannot reach the database.
I expose the Docker application following: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
Cloud SQL doesn't have Private IP enabled; I'm connecting using the Cloud SQL proxy.
In app.yaml I do specify beta_settings:cloud_sql_instances. I use the same value in the socketPath config for the mysql connection.
The error in docker logs is:
(node:1) UnhandledPromiseRejectionWarning: Error: connect ENOENT /cloudsql/x-alcove-224309:europe-west1:learning
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)
Can you please explain to me how to connect to Cloud SQL from a dockerized node application?
When you deploy your app on App Engine with gcloud app deploy, the platform runs it in a container along with a side-car container in charge of running the cloud_sql_proxy (you ask for it by specifying beta_settings:cloud_sql_instances in your app.yaml file).
Kubernetes Engine doesn't use an app.yaml file and doesn't supply this side-car container for you, so you'll have to set it up yourself. The public doc shows how to do it by creating secrets for your database credentials and updating your deployment file with the side-car container config. An example shown in the doc would look like:
...
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2   # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
...
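The volumeMounts entry above refers to a Kubernetes secret holding a service account key with access to Cloud SQL. A hedged sketch of creating it (the key file path is illustrative):
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=/path/to/service-account-key.json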
Generally, the best method is to connect using a sidecar container inside the same pod as your application. You can find examples on the "Connecting from Google Kubernetes Engine" page here. There is also a codelab here that goes more in-depth and might be helpful.
The documentation mentions that it is possible to connect using an internal IP address.
Did somebody try it?

Is there any better way to change the source code of a container instead of creating a new image?

What is the best way to change the source code of my application running as a Kubernetes pod without creating a new version of the image, so I can avoid the time taken for pushing and pulling the image from the repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
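Since the question is about a pod running in Kubernetes, the equivalent with kubectl (the pod name is a placeholder) would be:
kubectl exec -it <PODNAME> -- /bin/bash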
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
Have the container pull from git on creation?
Set up CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
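If you need to mount an additional host folder yourself, a hedged sketch with placeholder paths (the command keeps running in the foreground, so use a separate terminal):
minikube mount /path/on/host:/path/in/vm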
Putting it all together, this is what an nginx deployment would look like on kubernetes, mounting a local folder containing the web site being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: sources
          readOnly: true
      volumes:
      - name: sources
        hostPath:
          path: /Users/<username>/<source_folder>
          type: Directory
Finally, we resolved the issue. We changed our image repository from Docker Hub to AWS ECR in the same region where we are running the Kubernetes cluster. Now it takes much less time to push/pull images.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think it should be the standard process to create new images for each deployment.
A few benefits:
immutable images: no intervention in the running instance; this ensures the image runs the same in any environment
rollback: if you encounter issues in the new version, roll back to the previous version
dependencies: new versions may have new dependencies
