Is it possible to use Docker as a dev environment for a Golang App Engine Standard Environment app?

My understanding is that docker needs App Engine Flexible Environment.
But I want to use Docker to create a dev and local testing environment only, so that it will be easier and faster to replicate the environment on dev machines. I still want to deploy the Go app to the App Engine Standard Environment. I am wondering if there is a way.

You can build custom runtimes using Docker, but you only need App Engine Flexible if you want to deploy them. In your case, since you want to deploy to App Engine Standard, I would recommend using the Development Server to correctly simulate the environment.
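For example, a dev-only image could wrap the Development Server. This is a minimal sketch, assuming the Debian-based google/cloud-sdk image (where App Engine components are installed as apt packages) and an app.yaml at the repository root; it is never deployed, only run locally:

    # Dev-only Dockerfile: runs the App Engine dev server, not for deployment
    FROM google/cloud-sdk:latest
    # Go flavor of the development server (apt package in the Debian image)
    RUN apt-get update && apt-get install -y google-cloud-sdk-app-engine-go
    WORKDIR /app
    COPY . /app
    # 8080 = application, 8000 = admin console
    EXPOSE 8080 8000
    # Bind to 0.0.0.0 so the ports are reachable from outside the container
    CMD ["dev_appserver.py", "--host=0.0.0.0", "--admin_host=0.0.0.0", "app.yaml"]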

You cannot use Containers with App Engine Standard. You can use containers with App Engine Flexible.

Related

Multiple web apps with Docker architecture

I have multiple web apps, all of them running on Apache, many of them using PHP, MySQL, node, etc.
I'm not currently using Docker, but I would like to use it, and I would like to know what the best architecture to use would be.
I suppose that in my localhost I should create a container with Apache, and all the applications would be using it (am I wrong?). The same with MySQL if the application uses it.
But then, what happens when I want to deploy my projects (or some of them) to a production environment? I'm currently using Microsoft Azure Web Apps, and I don't think that my 'localhost' setup will be valid there. I suppose that in production each project should have its own Apache, but this changes my Docker setup, and I don't think this is the Docker philosophy.
So, how should I structure my architecture?
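For reference, the common pattern is one container per service, with each app bundling its own Apache rather than sharing one. A rough docker-compose sketch of that layout (all service names, images and ports here are placeholders):

    version: "3"
    services:
      app1:
        image: php:7.4-apache        # each app ships its own Apache + PHP
        volumes:
          - ./app1:/var/www/html     # mount the app code for local dev
        ports:
          - "8081:80"
      app2:
        image: php:7.4-apache
        volumes:
          - ./app2:/var/www/html
        ports:
          - "8082:80"
      db:
        image: mysql:8.0             # one shared MySQL container is fine locally
        environment:
          MYSQL_ROOT_PASSWORD: example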

kubernetes development environment to reduce development time

I'm new to devops and kubernetes and was setting up the local development environment.
For a hurdle-free deployment, I wanted to keep the development environment as similar as possible to the deployment environment. So I'm using minikube for a single-node cluster, which solves a lot of my problems, but as far as I know a developer needs to do the following to see a change:
write the code locally,
build a container image and push it to the container registry,
apply the Kubernetes configuration with the updated container image.
But the major issue with this approach is the long development cycle. Can you suggest a better approach that lets me see changes in (near) real time?
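Spelled out as commands, that inner loop looks roughly like this (registry, image and deployment names are hypothetical):

    # unique tag per change, so the rollout actually picks up the new image
    TAG=registry.example.com/myapp:$(git rev-parse --short HEAD)
    # 1. build the image after a code change
    docker build -t "$TAG" .
    # 2. push it to the registry
    docker push "$TAG"
    # 3. point the deployment at the new image
    kubectl set image deployment/myapp myapp="$TAG"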
The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. That's why a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes, which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace
I am afraid that the first two steps are practically mandatory if you want to have a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes in containers, as they could disappear at any moment.
There are tools like Helm or kubecfg that can help you with the third step:
apply the kubernetes configuration with updated container image
They allow versioning and deployment upgrades. You still need to learn how to use them, but they bring many advantages.
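With Helm, for example, that third step becomes a one-liner once a chart exists (the chart path, release name and value names are assumptions):

    # roll the deployment to a new image tag, recorded as a versioned release
    helm upgrade myapp ./chart --set image.tag=abc123
    # and roll back to a previous revision if it misbehaves
    helm rollback myapp 1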
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes; in the worst case you only have to restart the container. A minimal example follows the list below.
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
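Run by hand, a development container of this kind looks roughly like the following (image, port and command are placeholders):

    # the source code lives in a volume, so edits on the host show up inside
    # the container instantly; in the worst case, just restart the container
    docker run --rm -it -v "$(pwd)":/app -w /app -p 3000:3000 node:18 npm run dev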
I think using Docker / Kubernetes during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop the way I'm used to (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, to run the components that the currently developed component depends on: that might be a database, other microservices, or whatever.
On the Jenkins X project we're big fans of using DevPods for fast development - which basically means you compile/test/run your code inside a pod inside the exact same Kubernetes cluster as your CI/CD, using the exact same tools (maven, git, kubectl, helm etc).
This lets you use the desktop IDE of your choice, while all your developers work with the exact same operating system, containers and images for their development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to docker or virtualisation issues). Plus many developers' laptops are not big enough to run lots of services inside minikube, and it's always going to behave differently from your real cluster - plus the developers' tools and operating system are often very different from what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes, with live development via DevPods, to show how it all works.
I haven't been involved with Kubernetes and Docker for very long, but to my knowledge the first step is to learn whether it is possible, and how, to dockerize your application.
Kubernetes is not a tool for creating Docker images; it simply pulls pre-built images that were built with Docker.
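"Dockerizing" usually just means writing a Dockerfile; a minimal sketch (a Node.js app is assumed purely as an example):

    FROM node:18-alpine          # base image providing the runtime
    WORKDIR /app
    COPY package*.json ./
    RUN npm install              # install dependencies at build time
    COPY . .
    CMD ["node", "server.js"]    # what the container runs; Kubernetes just pulls and runs this image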
There are quite a few useful courses on Udemy, including this one.
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/

Can I use Docker for production deployment of a Rails application?

I want to use Docker to deploy my Rails application. I want to know if anyone has tried this, and what problems I might face.
Deploying Rails apps to production with Docker is not only possible, but something you'd want to do, to make sure your app runs the same on any server you deploy it to.
This comes with some challenges. First, it's advisable to run your database server and your Rails app in different containers to keep things isolated. You can also set up your production server's Docker environment with Docker Machine. Machine lets you provision AWS, DigitalOcean, Azure and Compute Engine instances (among many others) and manage your containers from your own computer. I assume you're just getting started with Docker, so I suggest you take a look at this cool guide about setting up a Rails + Postgres app with Docker.
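As a sketch of that workflow (machine name, token and image names are placeholders; --link is shown for simplicity and has since been superseded by user-defined networks):

    # provision a remote Docker host and point the local CLI at it
    docker-machine create --driver digitalocean \
        --digitalocean-access-token "$DO_TOKEN" rails-prod
    eval "$(docker-machine env rails-prod)"
    # database and app run in separate containers
    docker run -d --name db -e POSTGRES_PASSWORD=secret postgres:13
    docker run -d --name web --link db:db -p 80:3000 myrailsapp:latest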

Docker images for application packaging

There seem to be two practices for application packaging and deployment:
create a docker image and deploy it
build and deploy the application from the ground up.
I am confused about how to use option 1. The premise is that you take a docker image and re-use it on any platform. But how is this viable in practice, when an environment often has platform- and application-specific configuration? The docker image from my test environment cannot be deployed to production, as it contains mocks and test-level configuration.
The idea of packaging an application as a Docker image is to have all external/system configuration embedded in the application itself: any specific version of an external engine such as Java or Ruby, the basic GNU/Linux software in your system (no more differing versions of awk or grep), etc.
From my point of view, it is possible to have some slight differences between a development and a production image, but these differences should be minor configuration parameters like the log level. The advantage of using a container as the distribution unit of your app is avoiding all the pain caused by those external differences, and it also enables a new approach to 'web-scale' architectures and elastic platforms by providing a standard way to deploy them. Having some external services mocked in your test/development system should not be a problem; if it is, the problem lies with the mock itself. The mocks should not be baked into your application image: you can run them as separate images (or, when possible, avoid mocking the service and run the real one as a container).
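In practice that means shipping one image and injecting the few remaining differences at run time, for example (the variable names here are made up):

    # the exact same image runs in both environments
    docker run -d -e LOG_LEVEL=debug -e BACKEND_HOST=mock-backend myapp:1.0   # test
    docker run -d -e LOG_LEVEL=warn  -e BACKEND_HOST=api.internal myapp:1.0   # production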
Edit 1:
As a general approach, if you are using Docker as a tool to help with continuous integration or deployment to production, I would not recommend having different containers for development and production. If you have experience with IT automation tools such as Puppet, Chef, Ansible or Salt, they are an easy and probably fast way to configure your containers (Chef even has a Docker-specific approach, chef-container, which has some advantages here), and they are a good option to consider if your infrastructure is already built with them.
But if you are building/designing a new architecture based on Docker, I would look at more decentralized, container-oriented options such as Consul or etcd to manage configuration templates and data, service discovery, and elastic deployment with an orchestrator.

Linking containers together on production deploys

I want to migrate my current deployment to Docker. It relies on a MongoDB service, a Redis service, a Postgres server and a Rails app, and I have already created a Docker container for each, but I have doubts when it comes to starting and linking them. In development I'm using fig, but I think it was not meant to be used in production. To take my deployment to production level, what mechanism should I use to auto-start and link containers together? My deployment uses a single Docker host that already runs Ubuntu, so I can't use CoreOS.
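For reference, a fig.yml for a stack like this would look roughly as follows (service names and ports are assumptions):

    web:
      build: .
      links:          # fig wires the app to its backing services by name
        - db
        - redis
        - mongo
      ports:
        - "80:3000"
    db:
      image: postgres
    redis:
      image: redis
    mongo:
      image: mongo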
Linking containers in production is a tricky thing. It hardwires the IP addresses of the dependent containers, so if you ever need to restart a container or launch a replacement (like upgrading the version of MongoDB), your Rails app will not work out of the box with the new container and its new IP address.
This other answer explains some available alternatives to linking.
Regarding starting the containers, you can use any deployment tool to run the required docker commands (Capistrano can easily do that). After that, Docker will restart the containers after a reboot if you start them with a restart policy such as --restart=always.
You might need a watcher process to restart containers if they die, just as you would have one for a normal rails app.
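For plain container crashes, a Docker restart policy can stand in for that watcher (container name and image are placeholders):

    # restarts the container if it dies, and again after a daemon/host restart
    docker run -d --restart=always --name web myrailsapp:latest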
Services like Tutum and Dockerize.it can make this simpler. As far as I know, Tutum will not deploy to your servers. Dockerize.it will, but is very rough (disclaimer: I'm part of the team building it).
You can convert your fig configuration to CoreOS-formatted systemd unit files with fig2coreos. Google Compute Engine supports CoreOS images, or you can run CoreOS on AWS or your cloud provider of choice. fig2coreos also supports deploying to CoreOS in Vagrant for local development.
CenturyLink (fig2coreos authors) have an example blog post here:
This blog post will show you how to bridge the gap between building complex multi-container apps using Fig and deploying those applications into a production CoreOS system.
EDIT: If you are constrained to an existing host OS, you can use QEMU ("a generic and open source machine emulator and virtualizer") to host a CoreOS instance. Instructions are available from the CoreOS team.
