Best portable way to connect from within a pod in a local dev kubernetes cluster to docker compose service - docker

I'm setting up a local development environment for a cloud-native app. The idea is that once it's in production on Google Cloud, I'll be using Cloud SQL (the managed cloud service) for data persistence. While I'm developing the application locally, I'm using a local cluster with KinD, and I'd like my containers there to be able to reach a couple of external services outside the cluster (in this case PostgreSQL). I'm doing it this way to keep dev/prod parity.
I have Postgres running locally with docker compose alongside my cluster. I can already reach it from within my pod containers using the host's (my computer's) IP plus the exposed port, but this is not very portable: every team member would have to configure their host IP to get their local environment working. I would like to avoid this.
Is there a better solution? Thanks.

I might have just written a blog post which could help...
https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4
It runs the Cloud SQL Proxy as a sidecar to the application. That way, only the deployment YAML needs to change: the --instances parameter of the Cloud SQL Proxy switches from your local Postgres instance to the connection string of the Cloud SQL instance. You'll also need to sort out the service account file in the deployment (covered in the blog post) so that your k8s deployment in GKE has the right permissions to access the Cloud SQL instance.
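For illustration, a minimal sketch of such a sidecar deployment, assuming the v1 proxy image; the app name, project/instance connection string, and Secret name are placeholders, not values from the blog post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/my-app:latest       # placeholder image
          env:
            - name: DB_HOST
              value: "127.0.0.1"                       # the app always talks to localhost; the sidecar forwards the traffic
            - name: DB_PORT
              value: "5432"
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.33.2   # v1 proxy image; version tag is only an example
          command:
            - "/cloud_sql_proxy"
            - "-instances=my-project:my-region:my-instance=tcp:5432"   # swap this for your Cloud SQL connection name
            - "-credential_file=/secrets/service_account.json"
          volumeMounts:
            - name: cloudsql-sa
              mountPath: /secrets
              readOnly: true
      volumes:
        - name: cloudsql-sa
          secret:
            secretName: cloudsql-service-account       # placeholder Secret holding the service account key
```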

Related

How do you manage the variation between local and cloud dependencies within Docker?

I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, though that isn't required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
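As a rough sketch of that layout (service names, images and credentials are placeholders, not an officially supported template):

```yaml
version: "3.8"
services:
  app:
    build: .                      # your application server image
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db                 # in production this points at the cloud provider's database instead
      DB_USER: app
      DB_PASSWORD: example        # placeholder; use a proper secret in real setups
    depends_on:
      - db
  db:
    image: postgres:15            # match the cloud provider's version as closely as you can
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

In production you would simply not start the db service and point DB_HOST at the managed database.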

serverless framework: local kafka as event source

I'm trying to build a local development environment in order to run my local tests.
I need to use kafka as event-source.
I've deployed a self-managed cluster into my local environment using docker.
An issue is nagging at me: according to the documentation, I need to provide authentication.
That in itself is no problem; the issue is the kind of values the documentation requires me to provide: AWS secrets.
What do AWS secrets have to do with my self-managed, self-deployed Kafka cluster?
How could I provide my Kafka cluster as a local event source?
I mean, I thought I only needed to provide bootstrap servers, a consumer group and a topic, something like what the Knative serverless documentation describes.
Any ideas about how to connect to my local kafka?

Kubernetes - from Minikube to production

I have created a simple PHP api application that works with a mysql database to store data. I have been experimenting with Kubernetes on my Windows 10 machine through Minikube.
I have just about got my head round the ideas involved, yet I’m not sure about how to implement this properly. So far I have used Kompose to create a set of yaml files from an existing docker-compose file. This has been half successful.
To get my application code into a pod hosting PHP, I have been using hostPath to share from my local machine. I mount to the minikube machine and share from there. I was having trouble sharing by other means. The application code is hosted in a github repo.
My questions are:
Is mounting my application code into a pod (assuming this is similar to what happens in Docker) the correct way to do this? I'm not clear exactly what information is held in an image retrieved from Docker Hub, although I have read up on containers isolating the build environment from your machine.
How does this approach translate into a production environment hosted in the cloud? I see there are various storage types. I had, for example, wanted to try deploying on AWS just to see how this would work in practice.
I’m really looking for guidance to go from the tutorials found on the web working on my machine, to something that could be done for a customer hosted on the cloud. This might scale up to a more microservices style architecture over time.
The approach you are describing is mostly for development setups, where you want to mount your code into the container as a volume so you don't have to rebuild every time your code changes. Typically done with a docker-compose file.
For production setups, you want the Docker image to work on its own and only mount volumes for data you want to persist; databases are the typical example. For this, EKS is deeply integrated with the AWS infrastructure and will create EBS volumes on demand. You don't need to provision any volume, or even care in most cases (unless you need multiple read-write volumes for scaling).
For a PHP application you really should not persist any data in the pod, because it will create other issues when you need to scale the application. A good approach for managing files that need to persist is S3 (AWS Simple Storage Service).
So generally speaking, you need a Deployment per application, a Service to access the pods of that application, and then an Ingress object to route traffic from the internet to those pods.
Your application Docker image is really the core. You just build it with your code inside. Make sure to pass configuration using environment variables or a configuration file so you can connect to the database.
Now for Kubernetes: for each component (e.g. the PHP application, MySQL) you will most likely create a Deployment manifest that points to the Docker image and adds some configuration through environment variables.
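A minimal sketch of such a Deployment, assuming a placeholder image and variable names (adjust to your own registry and configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
        - name: php-app
          image: registry.example.com/php-app:1.0.0   # your image, with the code baked in at build time
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: mysql                            # the MySQL Service name inside the cluster
            - name: DB_NAME
              value: appdb
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials             # hypothetical Secret holding the DB password
                  key: password
```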
For production, you will need persistent volumes. On AWS you can simply use EBS-backed volumes.
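For instance, a PersistentVolumeClaim along these lines (storage class and size are examples) would be satisfied by an EBS volume created on demand on EKS:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce          # one node mounts it read-write, which is fine for a single MySQL pod
  storageClassName: gp2      # the default EBS-backed storage class on EKS; yours may differ
  resources:
    requests:
      storage: 10Gi
```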
To get traffic from the internet to your PHP application, you will need to add one or more k8s components (see the sketch after this list):
A k8s Service manifest that exposes your PHP deployment/pods on a stable address. If you only have one or very few services, you can use a LoadBalancer Service, which on a cloud like AWS will create an ALB/ELB (you might need to add an annotation to your Service).
An Ingress, which is just a reverse proxy (Contour, nginx, Traefik). In a cloud environment it will map to an ALB/ELB. The advantage is that you can have a single ALB for all your services, i.e. save money, and you can configure routing paths or TLS termination in one place.
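A sketch of the Service plus Ingress pair; the hostname, ingress class and names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: php-app
spec:
  selector:
    app: php-app                 # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-app
spec:
  ingressClassName: nginx        # or whichever ingress controller you run
  rules:
    - host: app.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: php-app
                port:
                  number: 80
```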

How can I define additional services in a Divio Cloud project using docker-compose?

I defined an additional service in my docker-compose.yml file in my Divio Cloud project.
Locally, it works just as expected. As well as the default web and db containers, I get my new container.
However, when I push this configuration to the Divio Cloud server, it's clearly not working at all, and I can't connect to the custom container.
In short
If you need an additional service in your project, you should configure it on the Divio Cloud, not in docker-compose.yml. docker-compose.yml is only used for local development purposes, and is ignored in deployment.
The longer answer
In Divio Cloud projects, docker-compose.yml is used to orchestrate all the services and containers in the local development environment only.
In the actual hosting environment, it's not used at all, and is simply ignored. Locally, your project has all the containers defined in the docker-compose.yml file - web, db and whatever else you define.
When your project is deployed on the hosting environment, only the web container is used.
The other containers are used locally for convenience, to replicate services that are part of the infrastructure.
For example, locally you have a db container running the Postgres database. In the cloud infrastructure, the web container connects to a Postgres cluster.
Similarly, if you have Celery in your cloud project, it will use backing services provided as part of the cloud infrastructure, but when you set up the same project locally, it will build them in new Docker containers.
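For illustration, a local docker-compose.yml in such a project looks roughly like this; the extra rabbitmq service is a hypothetical example of a custom container that runs locally but is ignored on deployment:

```yaml
services:
  web:
    build: .
    ports:
      - "8000:80"
    depends_on:
      - db
  db:
    image: postgres:15      # locally replicates the managed Postgres the cloud provides
  rabbitmq:
    image: rabbitmq:3       # hypothetical custom service: available locally, not on the cloud
```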
More information at docker-compose.yml reference in the Divio Cloud Developer Handbook.
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.

How to deploy docker app using docker-compose.yml in cloud foundry

I have a docker-compose.yml file which has environment variables and certificates. I would like to deploy these on the Cloud Foundry dev version.
I want to deploy the Microgateway on Cloud Foundry; the link for the Microgateway is below:
https://github.com/CAAPIM/Microgateway
In the cloud-native world, you instantiate services in your foundation beforehand. You can use prebuilt services (e.g. the auto-scaler) available from the marketplace.
If the service you want is not available, you can install a tile (e.g. Redis, MySQL, RabbitMQ), which will add services to the marketplace. Lots of vendors provide tiles that can be installed on PCF (check network.pivotal.io for the full list).
If you have services that are outside of Cloud Foundry (e.g. Oracle, Mongo, or MS SQL Server) and you wish to inject them into your Cloud Foundry foundation, you can do that by creating User-Provided Services (cups).
Once you have a service, you have to create a service instance. Think of it as provisioning the service for you. After you have provisioned it, i.e. created a service instance, you can bind it to one or more apps.
A service instance is scoped to an org and a space. All apps within that org and space can be bound to that service instance.
You deploy your app individually, by itself, to Cloud Foundry (jar, war, zip). You then bind any needed services to your app (e.g. db, scaling, caching, etc.).
Use a manifest file to do all these steps in one deployment.
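A minimal sketch of such a manifest, where the app name, memory setting, environment variable and service instance names are placeholders:

```yaml
# manifest.yml -- deployed with `cf push`
applications:
  - name: microgateway          # placeholder app name
    memory: 512M
    instances: 1
    env:
      LOG_LEVEL: info           # example environment variable
    services:
      - my-postgres             # marketplace or user-provided service instances to bind
      - my-autoscaler
```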
PCF 2.0 is introducing PKS - the Pivotal Container Service. It is an implementation of Kubo within PCF. It is still not GA.
Kubo, Kubernetes, and PKS allow you to deploy your containerized applications.
I have played with Minikube and a little bit of Kubo. I'm still getting my hands wet with PKS.
Hope this helps!
