How to configure SCDF Skipper to use a pre-existing Docker instance?

I'm currently evaluating the use of Spring Cloud Data Flow for our infrastructure. We already use RabbitMQ and Kubernetes, so that would be our target environment.
For local testing purposes I use dockerized MySQL and RabbitMQ, and I want SCDF Skipper to deploy the stream services to my local Docker instance so they can use the pre-existing MySQL and RabbitMQ containers (and I can manage and monitor everything in a single Docker instance).
My first approach was to use Skipper and the Data Flow Server from docker-compose, but since I failed to deploy anything, I switched to using the jars, following this tutorial:
https://dataflow.spring.io/docs/installation/local/manual/
So far, deployment of the stream works, but it fails to connect to my pre-existing, dockerized MySQL. That is because, by default, SCDF Skipper seems to deploy to an internal Docker instance.
So my question is:
Is there any way to configure SCDF Skipper to use the Docker instance on my local machine as the deployment target?

After another iteration of research, I stumbled upon
https://dataflow.spring.io/docs/installation/local/docker/#docker-stream--task-applications
Apparently, to have Skipper and the Data Flow Server deploy from within Docker to the host's Docker instance (DooD, Docker-out-of-Docker), you have to add another docker-compose.yml.
That does NOT solve how to use a pre-existing Docker instance when running Skipper locally from the jar, but at least it lets me run Skipper and the Data Flow Server as containers on a pre-existing Docker instance and thus use it as the deployment target.
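For reference, this is roughly what such an override looks like: the Skipper and Data Flow Server containers get the host's Docker socket mounted, so whatever they deploy lands on the host's Docker daemon. This is only a minimal sketch, with the file name and the service names (dataflow-server, skipper-server) assumed from the usual SCDF docker-compose setup; the file shipped with the linked guide may differ.

    # docker-compose.dood.yml -- minimal DooD override (file and service names assumed)
    # Mounting the host's /var/run/docker.sock makes the containers talk to the
    # host's Docker daemon instead of an isolated one.
    version: "3"
    services:
      dataflow-server:
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      skipper-server:
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock

Started with something like docker-compose -f docker-compose.yml -f docker-compose.dood.yml up, the deployed stream apps then show up as ordinary containers next to the pre-existing MySQL and RabbitMQ ones.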

Related

How do you manage the variation between local and cloud dependencies within Docker?

I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, but that's not required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
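A minimal sketch of that layout for local development, assuming PostgreSQL and made-up environment variable names (DB_HOST, DB_PORT); in production you would drop the db service and point the same variables at the cloud provider's database endpoint.

    # docker-compose.yml -- local development: app and DB as separate containers
    version: "3"
    services:
      app:
        build: .
        environment:
          DB_HOST: db            # in production, set this to the managed DB endpoint
          DB_PORT: "5432"
        depends_on:
          - db
      db:
        image: postgres:13       # match your cloud provider's version as closely as possible
        environment:
          POSTGRES_PASSWORD: example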

Best portable way to connect from within a pod in a local dev kubernetes cluster to docker compose service

I'm setting up a local development environment for a cloud-native app. The idea is that once it's in production on Google Cloud, I'll be using Cloud SQL (a managed cloud service) for data persistence. While developing locally, I use a local cluster with KinD and would like my containers there to be able to reach a couple of external services outside the cluster (in this case PostgreSQL); I'm doing it this way to keep dev/prod parity.
I have Postgres running locally with docker compose alongside my cluster. While I can already reach it from within my pod containers using the host's (my computer's) IP plus the exposed port, this is not very portable and would require every team member to configure their host IP to get their local environment working. I would like to avoid this.
Is there a better solution? Thanks.
I might have just written a blog post which could help...
https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4
It runs the Cloud SQL Proxy as a sidecar to the application. This way, only the deployment YAML needs to change: the --instances parameter of the Cloud SQL Proxy switches from your local Postgres instance to the connection string of the Cloud SQL instance. You'll also need to sort out the service account file in the deployment (covered in the blog post) so that your k8s deployment in GKE has the right permissions to access the Cloud SQL instance.
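A rough sketch of that sidecar layout, assuming the v1 Cloud SQL Proxy image and placeholder names (myapp, the cloudsql-service-account secret, the instance connection string); the blog post has the authoritative version.

    # deployment.yaml -- Cloud SQL Proxy as a sidecar (placeholder names)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: app
              image: myapp:latest
              env:
                - name: DB_HOST
                  value: "127.0.0.1"     # the app always talks to localhost
                - name: DB_PORT
                  value: "5432"
            - name: cloud-sql-proxy
              image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
              command:
                - /cloud_sql_proxy
                # the only value that changes between environments
                - -instances=my-project:my-region:my-instance=tcp:5432
                - -credential_file=/secrets/credentials.json
              volumeMounts:
                - name: cloudsql-sa
                  mountPath: /secrets
                  readOnly: true
          volumes:
            - name: cloudsql-sa
              secret:
                secretName: cloudsql-service-account

Locally, the same application configuration keeps working as long as your docker-compose Postgres is reachable on 127.0.0.1:5432 inside the pod (for example via a small forwarding sidecar), so nothing but the deployment YAML differs between environments.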

Gitlab CI with docker+machine - Using multiple containers to test app

I'm using Gitlab CI, configured with a docker+machine executor, to build and test my app on spot instances.
My main app requires a few microservices to be available in production as well as in the test step. All of these microservices are built and tested on the same GitLab CI server (each in its own pipeline). The output of each microservice is a Docker image that is pushed to the GitLab Docker Registry.
The test step I'm trying to build:
1. Provision a spot instance (if there's no idle one), installed with the microservice Docker image
2. Test step
2.1. Provision a spot instance (if there's no idle one), installed with the app Docker image
2.2. Run the testing script
2.3. Stop the app container, release the spot instance
3. Stop the microservice container, release the spot instance
I've got 2.1, 2.2 and 2.3 to work by following the instructions here, but I'm not sure how to achieve the rest. I could run docker-machine explicitly in the YAML, but I'd like to use GitLab's docker+machine executor, as it's configured with the credentials, limitations, off-peak settings, etc.
Is this possible with GitLab's executor? How?
What's the "correct" way to go about doing something like this? I'm sure I'm not the first one testing with microservices, but I couldn't find any info on how to do so.
You are probably looking for the CI Services functionality. They have a couple of examples of how to use a service (MySQL, PostgreSQL, Redis). If you use another Docker image, the service will get a hostname derived from the image name (e.g., tutum/wordpress will get the DNS hostnames tutum-wordpress and tutum__wordpress; for more info, refer to the details about hostnames).
There are also details about running Postgres in the shell executor if you are so inclined, and there is a presentation on testing things with GitLab CI and Docker.
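For the case of another Docker image, a hedged sketch of what the job definition could look like; the services:/alias syntax is standard GitLab CI, while the registry paths, alias and script are placeholders.

    # .gitlab-ci.yml -- run a previously built microservice image as a CI service
    test:
      stage: test
      image: registry.gitlab.com/my-group/my-app/test-runner:latest
      services:
        - name: registry.gitlab.com/my-group/my-microservice:latest
          alias: my-microservice     # reachable from the job under this hostname
      script:
        - ./run-tests.sh --service-url http://my-microservice:8080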

How can I define additional services in a Divio Cloud project using docker-compose?

I defined an additional service in my docker-compose.yml file in my Divio Cloud project.
Locally, it works just as expected. As well as the default web and db containers, I get my new container.
However, when I push this configuration to the Divio Cloud server, it's clearly not working at all, and I can't connect to the custom container.
In short
If you need an additional service in your project, you should configure it on the Divio Cloud, not in docker-compose.yml. docker-compose.yml is only used for local development purposes, and is ignored in deployment.
The longer answer
In Divio Cloud projects, docker-compose.yml is used to orchestrate all the services and containers in the local development environment only.
In the actual hosting environment, it's not used at all, and is simply ignored. Locally, your project has all the containers defined in the docker-compose.yml file - web, db and whatever else you define.
When your project is deployed on the hosting environment, only the web container is used.
The other containers are used locally for convenience, to replicate services that are part of the infrastructure.
For example, locally you have a db container running the Postgres database. In the cloud infrastructure, the web container connects to a Postgres cluster.
Similarly, if you have Celery in your cloud project, it will use backing services provided as part of the cloud infrastructure, but when you set up the same project locally, it will build them in new Docker containers.
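As a sketch, a local docker-compose.yml in such a project could look roughly like this, with an illustrative extra redis service (image tags and port mappings are assumptions); on deployment, everything except web is ignored and replaced by Divio's own infrastructure.

    # docker-compose.yml -- used for local development only
    version: "2"
    services:
      web:
        build: .
        ports:
          - "8000:80"
        links:
          - db
          - redis
      db:
        image: postgres:9.6
      redis:                     # custom extra service: works locally, ignored in the cloud
        image: redis:4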
More information at docker-compose.yml reference in the Divio Cloud Developer Handbook.
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.

Feasibility of choosing EC2 + Docker as a production deployment option

I am trying to deploy my microservice to an EC2 machine. I have already launched my EC2 machine with an Ubuntu 16.04 LTS AMI, installed Docker, and run containers on it. I also tried a sample service deployment using Docker on my Ubuntu machine, and successfully ran images in the background using the -d option.
Can I choose EC2 + Docker for deploying my microservice to an actual production environment? Then I could deploy all my Spring Boot microservices this way.
I know that ECS is another option for me. To be frank, I am trying to avoid ECR, the ECS-optimized AMI and their burdens; I'm looking for a machine with full control that belongs only to me.
But I still need to know about the feasibility of choosing EC2 + Docker on my Ubuntu machine. I am also planning to deploy my Angular 2 app. I don't need to install, deploy and manage any application server for either Spring Boot or Angular, since it will give me something like a serverless production environment.
What you are describing is a "traditional" single-server environment and does not have much in common with a microservices deployment. However, keep in mind that this may be OK if it is only you, or a small team, working on the whole application. The microservices architectural style was introduced to handle huge, complex applications with large development teams that need to scale out immensely due to fast business growth. Here is an example story from Uber.
Please read this for more information about how and why the microservices architectural style was introduced as well as the benefits/drawbacks. Now about your question:
"Can I choose this EC2 + Docker for deployment of my microservice for actual production environment? "
Your question can be simply answered: You can, but it is probably not a good idea assuming you have a large enough project to require a microservices architecture.
You would have to implement all of the following deployment aspects yourself; they are typically covered by an orchestration system such as Kubernetes (see the sketch after this list):
Service Discovery and Load Balancing
Horizontal Scaling
Multi-Container Application Deployment
Container Health-Management / Self-Healing
Virtual Networking
Rolling Updates
Storage Orchestration
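As a rough illustration of how much of that list an orchestrator covers out of the box, here is a minimal Kubernetes Deployment plus Service; all names, the image, ports and the probe path are placeholders.

    # deployment.yaml -- minimal Deployment + Service (placeholder values)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 3                      # horizontal scaling
      strategy:
        type: RollingUpdate            # rolling updates
      selector:
        matchLabels:
          app: orders-service
      template:
        metadata:
          labels:
            app: orders-service
        spec:
          containers:
            - name: orders-service
              image: registry.example.com/orders-service:1.0.0
              ports:
                - containerPort: 8080
              readinessProbe:          # health management / self-healing
                httpGet:
                  path: /actuator/health
                  port: 8080
    ---
    apiVersion: v1
    kind: Service                      # service discovery and load balancing
    metadata:
      name: orders-service
    spec:
      selector:
        app: orders-service
      ports:
        - port: 80
          targetPort: 8080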
"Since It will gives me about a serverless production environment to
me."
EC2 is by definition not serverless, of course. You will have to maintain your EC2 instances, including OS updates, security patches etc. And if you only have a single server you will have service outages because of it.
You can do it. I have had Docker on standard EC2 instances running without problem. By "my microservice" you mean a single microservice, right?
You don't need service discovery or routing rules?
Can I choose EC2 + Docker for deploying my microservice to an actual production environment?
Yes, this is totally possible, although I suggest using Kubernetes as the container orchestrator, as it manages the lifecycle of the containers for you:
Running Kubernetes on AWS EC2
Amazon Elastic Container Service for Kubernetes
Manage Kubernetes Clusters on AWS Using Kops
Amazon EKS
