Linking two applications in Cloud Foundry - grails

My application is split into two applications (microservices?). One is deployed to CF as a Docker container and the other as a regular WAR file (a Grails application). The Docker container is a Python/Flask application that exposes a REST API. The web app uses this REST API.
Previously I ran these applications as two separate Docker containers and linked them using the docker run --link flag, but now I'm looking to move these applications to Cloud Foundry. In the Docker world I used to do the following:
docker run -d --link python_container -p 8080:8080 --name war_container war_image
The above command would create environment variables in war_container exposing the IP address and port of python_container. Using these environment variables I could communicate with python_container from my WAR application. These environment variables would look like this:
PYTHON_CONTAINER_PORT_5000_TCP_ADDR=<ipaddr>
PYTHON_CONTAINER_PORT_5000_TCP_PORT=<port>
Question
Is it possible to do something similar in Cloud Foundry? I have the Docker container already deployed in CF. How can I link my WAR application in such a way that environment variables pointing to python_container are created as soon as I push the WAR file?
Currently I push the docker container to PCFDev using this:
cf push my-app -o 192.168.0.102:5002/my/container:latest
Then I push the war file using
cf push cf-sample -n cf-sample
The manifest.yml for cf-sample is:
---
applications:
- name: cfsample
  memory: 1G
  instances: 1
  path: target/cf-sample-0.1.war
  buildpack: java_buildpack
  services:
  - mysql
  - rabbitmq

How would the Docker way work if you had multiple instances of your Python app, and hence multiple IPs/ports?
For PCF, you could just have the Java app talk to your Python app "through the front door": https://my-python-app.local.pcfdev.io.
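For example, you could hand that route to the Java app as an environment variable at deploy time. A minimal sketch (PYTHON_API_URL is a name invented here, not something CF sets for you):
cf set-env cf-sample PYTHON_API_URL https://my-python-app.local.pcfdev.io
cf restage cf-sample   # restage so the app picks up the new variable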
You can also try to discover your Python app's IP and port (see https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#CF-INSTANCE-IP for instance) and then pass these values to your Java app as environment variables, but this is a very brittle solution.
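A sketch of that brittle approach, assuming a single instance of the Python app (the address printed below is only valid until CF reschedules the container):
cf ssh my-app -c 'echo $CF_INSTANCE_IP:$CF_INSTANCE_PORT'   # note the address printed
cf set-env cf-sample PYTHON_APP_ADDR <ip>:<port>            # paste the value from above
cf restage cf-sample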
If you're interested in direct container-to-container networking, you might be interested in reading about and giving feedback on this proposal.

Can Google's Container OS be used with gRPC on Compute Engine?

My high-level architecture is described in Cloud Endpoints for gRPC.
The server (from the architecture diagram there) is a Compute Engine instance with Docker installed, running two containers: the ESP and my server.
As per Getting started with gRPC on Compute Engine, I SSH into the VM and install Docker on the instance (see Install Docker on the VM instance). Finally I pull down the two Docker containers (ESP and my server) and run them.
I've been reading about Container-Optimized OS from Google.
Rather than provisioning an instance with a stock OS and then installing Docker, I could provision the instance with Container-Optimized OS and then pull down my containers and run them.
However the only gRPC tutorials are for gRPC on Kubernetes Engine, gRPC on Kubernetes, and gRPC on Compute Engine. There is no mention of Container OS.
Has anyone used Container OS with gRPC, or can anyone see why this wouldn't work?
Creating an instance for advanced scenarios looks relevant because it states:
Use this method to [...] deploy multiple containers, and to use
cloud-init for advanced configuration.
For context, I'm trying to move to CI/CD in Google Cloud, and removing the need to install Docker would be a step in that direction.
You can follow almost the same instructions as in the Getting started with gRPC on Compute Engine guide to deploy your gRPC server with the ESP on Container-Optimized OS. For your purposes, just treat Container-Optimized OS as an OS with Docker pre-installed (it has more features, but this is the only one that matters here).
It is possible to use cloud-init if you want to automate the startup of your Docker containers (gRPC server + ESP) when the VM instance starts. The following cloud-init.cfg file automates the startup of the same containers presented in the documentation examples (with the bookstore sample app). You can replace the Creating a Compute Engine instance part with two steps.
Create a cloud-init config file
Create cloud-init.cfg with the following content:
#cloud-config
runcmd:
- docker network create --driver bridge esp_net
- docker run
  --detach
  --name=bookstore
  --net=esp_net
  gcr.io/endpointsv2/python-grpc-bookstore-server:1
- docker run
  --detach
  --name=esp
  --net=esp_net
  --publish=80:9000
  gcr.io/endpoints-release/endpoints-runtime:1
  --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog
  --rollout_strategy=managed
  --http2_port=9000
  --backend=grpc://bookstore:8000
Just after starting the instance, cloud-init will read this configuration and:
create a Docker network (esp_net)
run the bookstore container
run the ESP container. In this container startup command, replace <YOUR_PROJECT_ID> with your project ID (or replace the whole --service option depending on your service name)
Create a Compute Engine instance with Container-Optimized OS
You can create the instance from the Console, or via the command line:
gcloud compute instances create instance-1 \
--zone=us-east1-b \
--machine-type=n1-standard-1 \
--tags=http-server,https-server \
--image=cos-73-11647-267-0 \
--image-project=cos-cloud \
--metadata-from-file user-data=cloud-init.cfg
The --metadata-from-file flag populates the user-data metadata key with the contents of cloud-init.cfg. This cloud-init config is taken into account when the instance starts.
You can validate that this works by:
SSHing into instance-1 and running docker ps to see your running containers (gRPC server + ESP). You may experience some delay between the startup of the instance and the startup of both containers
calling your gRPC service with a client. For example (again with the bookstore application presented in the docs):
INSTANCE_IP=$(gcloud compute instances describe instance-1 --zone us-east1-b --format="value(network_interfaces[0].accessConfigs.natIP)")
python bookstore_client.py --host $INSTANCE_IP --port 80 # returns a valid response
Note that you can also choose not to use cloud-init. You can run the docker run commands (the same ones as in the cloud-init.cfg file) directly on your Container-Optimized OS VM, exactly as you would on any other OS.

Binding ports when running Docker images in Singularity

I am currently working on a distributed graph processing platform which maintains an Akka cluster inside of docker containers and have recently been granted access to a large cluster to test this. Unfortunately, this cluster does not run docker, only singularity.
This did not initially seem an issue, as Singularity supports Docker images. However, due to the nature of the Akka cluster, I have to pass several environment variables and bind several ports. As an example, a 'Partition Manager' within the system would be run with the following command:
docker run -p $PM0Port:2551 --rm -e "HOST_IP=$IP" -e "HOST_PORT=$PM0Port" -v $entityLogs:/logs/entityLogs $Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
From looking through the Singularity documentation I can see that I can create a 'Singularity' definition file and specify the environment variables, but there doesn't seem to be any documentation on binding custom ports. Nor does it explain how I could pass arguments to the default entrypoint (the project is compiled with 'sbt docker:publish', so I am not sure exactly where the entrypoint is defined in order to reassign it).
Even if that were the solution, as there are multiple actor types (and several instances of each), specifying environment variables and ports in a definition file would seemingly require templating, creating the files at run time, and building an image for each individual actor.
I am sure I have completely missed a page somewhere which would nicely translate this docker command into the equivalent singularity, but I just can't find it.
There is no network isolation in Singularity, so there is no need to map any port. If the process inside the container binds to an IP:port, it will be immediately reachable on the host.
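A rough Singularity equivalent of the docker run above might therefore look like this (a sketch, assuming a Singularity version with docker:// support; environment variables are injected via the SINGULARITYENV_ prefix, and Singularity converts the Docker ENTRYPOINT into the runscript, so trailing arguments are passed through):
# there is no -p equivalent: the process shares the host network, so it must bind $PM0Port itself
export SINGULARITYENV_HOST_IP=$IP
export SINGULARITYENV_HOST_PORT=$PM0Port
singularity run \
    --bind $entityLogs:/logs/entityLogs \
    docker://$Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper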

How can I provide application config to my .NET Core Web API services running in docker containers?

I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my Docker images, which would then limit which environments the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
It may be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the Docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run command.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker documentation on data volumes.
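Applied to an ASP.NET Core service, option 2) could look like this (a sketch; the image name and paths are made up, and it assumes the default configuration builder, which reads appsettings.json plus environment variables, with __ separating nested keys):
# one image for every environment; configuration is supplied at run time
docker run -d \
    -e ASPNETCORE_ENVIRONMENT=Staging \
    -e ConnectionStrings__Default="Server=db;Database=app" \
    -v /etc/myapi/appsettings.json:/app/appsettings.json:ro \
    myorg/my-api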
We are using a different packaging system for now (not Docker itself), but still have the same issue: a package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment
Inject into our package the basic environment variables that hold the configuration management system's connection details
This way we not only allow the package to run in almost any "known" environment, but also get run-time configuration management.
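In Docker terms that pattern might look like this (a sketch; the variable names and config server URL are invented for illustration):
# only the config server's coordinates are baked into the run command;
# the application fetches everything else from the config server at startup
docker run -d \
    -e CONFIG_SERVER_URL=https://config.internal:8888 \
    -e CONFIG_SERVER_TOKEN="$TOKEN" \
    myorg/my-api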
When you are running Docker, you can use the environment variable options of the run command:
$ docker run -e "deep=purple" ...

Establishing configurations in new containers

I am setting up a brand new development environment (nginx + php-fpm) and decided to create application containers (using Docker) for each service.
Normally I would install nginx and PHP, modify the configuration (with an editor like vim), and reload the services until they were correctly configured.
To establish a similar procedure, I start the initial container, copy /etc/nginx to the host, modify the config files on the host, and use a Dockerfile (containing another COPY) to test the changes.
Given that the containers are somewhat ephemeral and aren't really meant to contain utilities like vim, I was wondering how people set up the initial configuration?
Once I have a working config I know the options with regards to configuration management for managing the files. It's really the establishment of new containers that I'm curious about.
You can pass configuration through environment variables or mount a host file as a data volume.
Example for nginx configuration:
docker run --name some-nginx -v /yoursystem/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
You can use many images from Docker Hub as a starting point, e.g. nginx-php-fpm.
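To get an initial configuration to edit, without installing an editor inside the container, you could copy the stock files out of a throwaway container first (a sketch using the official nginx image; docker cp also works on containers that were never started):
docker create --name tmp-nginx nginx          # create, but don't start, a container
docker cp tmp-nginx:/etc/nginx ./nginx-conf   # copy the default config tree to the host
docker rm tmp-nginx                           # discard the throwaway container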

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like the URL of the CouchDB server to use, etc.
What is the best way of supplying configuration? My app relies on env variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your CouchDB server. This will work if your server is also a container (or if you use an ambassador container to another server). Linking containers makes some environment variables available, including the server's IP and port, that your script can use (see the sketch after this list). This works if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original and ADD custom configuration files or ENV entries. This is the least flexible option but is useful in complex configurations to simplify launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). It can be combined with any of the above.
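For the --link option, a sketch (myorg/webapp is a placeholder; the legacy --link mechanism injects ALIAS_PORT_<port>_TCP_ADDR/_PORT variables for every port the linked image exposes, e.g. 5984 for the official couchdb image):
docker run -d --name couchdb couchdb
# inside "webapp", COUCHDB_PORT_5984_TCP_ADDR and COUCHDB_PORT_5984_TCP_PORT are now set
docker run -d --name webapp --link couchdb:couchdb myorg/webapp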
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
