OpenShift: restart the container at an exact time and execute a pre-script - docker

I have a requirement to restart an OpenShift container every day at 10 AM and execute a pre-script that performs file downloads.
Since my Flask app runs on gunicorn with 4 workers, I cannot place the file download logic in the Flask application itself, as it would execute the download logic 4 times.
i.e. I have a Python Flask app running in OpenShift which uses 10 files (dynamic files with daily updates). So the case here is:
Download the new files daily at 10 AM.
Once the new files are downloaded, restart the container (the restarted app will use the new files, since they have arrived in the persistent volume).
Please suggest the smartest way we can achieve this, using liveness/readiness probes or any other approach.

You can set up an NFS filesystem with the ReadWriteMany access mode, which will be shared across the 4 pod replicas if that is how you are running.
If you are running the 4 gunicorn workers inside a single pod backed by a single PVC, you can create an API endpoint to download the files; by invoking that API you simply write the downloaded files to the PVC filesystem.
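As a rough illustration, here is a minimal sketch of such an endpoint; the endpoint name, PVC mount path and file URLs are placeholders, not from the original question:

# Illustrative only: endpoint name, mount path and file URLs are assumptions.
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

PVC_MOUNT = "/data"                                   # assumed PVC mount path
FILE_URLS = ["https://example.com/files/file01.csv"]  # assumed daily file URLs

@app.route("/refresh-files", methods=["POST"])
def refresh_files():
    """Download the daily files onto the PVC; called once by the CronJob."""
    downloaded = []
    for url in FILE_URLS:
        name = os.path.basename(url)
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        with open(os.path.join(PVC_MOUNT, name), "wb") as fh:
            fh.write(resp.content)
        downloaded.append(name)
    return jsonify({"downloaded": downloaded})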
To restart the pod at 10 AM you can take the help of Kubernetes CronJobs: one CronJob invokes your API endpoint to download the files, and a second CronJob at 10:15 AM (or after a few minutes of delay) restarts the deployment.
CronJobs: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
You can create the CronJob to run at 10 AM and restart the specific deployment or pod as needed.
Extra:
gunicorn is generally not required with K8s; rather than multi-processing inside the pod, scaling should be taken care of by K8s itself via HPA & VPA.
A Python K8s client is also available, so you can restart the deployment from Python as well.
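For example, the second CronJob could run a small script like the following, a minimal sketch with the official Python Kubernetes client (deployment and namespace names are placeholders). Patching the pod-template annotation triggers a rolling restart, the same mechanism kubectl rollout restart uses:

# Sketch only: deployment and namespace names below are placeholders.
from datetime import datetime, timezone

from kubernetes import client, config

config.load_incluster_config()  # use config.load_kube_config() outside the cluster
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # Bumping this annotation changes the pod template,
                    # which makes the Deployment roll out new pods.
                    "kubectl.kubernetes.io/restartedAt":
                        datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}

apps.patch_namespaced_deployment(
    name="flask-app",        # assumed Deployment name
    namespace="my-project",  # assumed namespace
    body=patch,
)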

Related

Using Transfer for on-premises option to transfer files

[google-cloud-storage] I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker script on Linux and the GCP bucket is created. I now need to run the docker run command to copy the files. My question is how do I specify the source & target places in the docker command. For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and target bucket+prefix. When you do this, the service will contact the agents and use them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
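For completeness, the copy job can also be created programmatically instead of through the Console. Below is a heavily hedged sketch using the google-cloud-storage-transfer Python client; the project, source directory and bucket names are placeholders, and the exact field names should be checked against the current API reference:

# Hedged sketch: assumes the on-premises agents are already running.
# Project ID, source directory and bucket name are placeholders.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

job = client.create_transfer_job(
    {
        "transfer_job": {
            "project_id": "my-project",
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                # directory on the machine(s) where the agents run
                "posix_data_source": {"root_directory": "/data/exports"},
                # destination Cloud Storage bucket
                "gcs_data_sink": {"bucket_name": "my-target-bucket"},
            },
        }
    }
)
print(job.name)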

Slow install / upgrade through Helm (for Kubernetes)

Our application consists of circa 20 modules. Each module contains a (Helm) chart with several deployments, services and jobs. Some of those jobs are defined as Helm pre-install and pre-upgrade hooks. Altogether there are probably about 120 yaml files, which eventually result in about 50 running pods.
During development we are running Docker for Windows version 2.0.0.0-beta-1-win75 with Docker 18.09.0-ce-beta1 and Kubernetes 1.10.3. To simplify management of our Kubernetes yaml files we use Helm 2.11.0. Docker for Windows is configured to use 2 CPU cores (of 4) and 8GB RAM (of 24GB).
When creating the application environment for the first time, it takes more than 20 minutes to become available. This seems far too slow; we are probably making an important mistake somewhere. We have tried to improve the (re)start time, but to no avail. Any help or insights to improve the situation would be greatly appreciated.
A simplified version of our startup script:
#!/bin/bash
# Start some infrastructure
helm upgrade --force --install infrastructure modules/infrastructure/chart
# Start ~20 modules in parallel
helm upgrade --force --install module01 modules/module01/chart &
[...]
helm upgrade --force --install module20 modules/module20/chart &
# Wait for the background helm upgrades to finish
wait
Executing the same startup script again later to 'restart' the application still takes about 5 minutes. As far as I know, unchanged objects are not modified at all by Kubernetes. Only the circa 40 hooks are run by Helm.
Running a single hook manually with docker run is fast (~3 seconds). Running that same hook through Helm and Kubernetes regularly takes 15 seconds or more.
Some things we have discovered and tried are listed below.
Linux staging environment
Our staging environment consists of Ubuntu with native Docker. Kubernetes is installed through minikube with --vm-driver none.
Contrary to our local development environment, the staging environment retrieves the application code through a (deprecated) gitRepo volume for almost every deployment and job. Understandably, this only seems to worsen the problem. Starting the environment for the first time takes over 25 minutes, restarting it takes about 20 minutes.
We tried replacing the gitRepo volume with a sidecar container that retrieves the application code as a TAR. Although we have not modified the whole application, initial tests indicate this is not particularly faster than the gitRepo volume.
This situation can probably be improved with an alternative type of volume that enables sharing of code between deployments and jobs. We would rather not introduce more complexity, though, so we have not explored this avenue any further.
Docker run time
Executing a single empty alpine container through docker run alpine echo "test" takes roughly 2 seconds. This seems to be overhead of the setup on Windows. That same command takes less than 0.5 seconds on our Linux staging environment.
Docker volume sharing
Most of the containers - including the hooks - share code with the host through a hostPath. The command docker run -v <host path>:<container path> alpine echo "test" takes 3 seconds to run. Using volumes seems to increase runtime by approximately 1 second.
Parallel or sequential
Sequential execution of the commands in the startup script does not improve startup time, nor does it drastically worsen it.
IO bound?
The Windows Task Manager indicates that IO is at 100% when executing the startup script. Our hooks and application code are not IO intensive at all, so the IO load seems to originate from Docker, Kubernetes or Helm. We have tried to find the bottleneck, but were unable to pinpoint the cause.
Reducing IO through ramdisk
To test the premise of being IO bound further, we exchanged /var/lib/docker with a ramdisk in our Linux staging environment. Starting the application with this configuration was not significantly faster.
To compare Kubernetes with Docker, you need to consider that Kubernetes runs more or less the same Docker command in the final step. Before that happens, many things take place:
the authentication and authorization processes, creating objects in etcd, locating the correct nodes for the pods, scheduling them, provisioning storage, and more.
Helm itself also adds overhead to the process, depending on the size of the chart.
I recommend reading One year using Kubernetes in production: Lessons learned. The author explains what they achieved by switching to Kubernetes, as well as the differences in overhead:
Cost calculation
Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.
For larger deployments, it’s easy to save a lot on server costs. The overhead of running etcd and a master node aren’t significant in these deployments. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. When running Kubernetes sounds great, but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.

Run Docker with old computer

I can't run docker (linux) containers on my PC - it is too slow for that. Are there other ways of running/developing/testing docker containers in a similar way to doing it on my PC? Maybe some browser app? Or is the only option to simply host a VM somewhere like DigitalOcean or AWS?
Use AWS ECS. AWS gives you a 12-month free trial that includes some free resources, such as server hours. After you create a new AWS account, go to the AWS ECS service, then go to the repository section and create a new repository by uploading your Docker image. Now go to the tasks section and create a task definition for your Docker image (memory, port and so on). Then create a new service, assign it to the task definition you just created, and run it.
This process is straightforward and will take you around an hour to finish.
Follow the steps in this video: https://www.youtube.com/watch?v=1wLMLwjCqN4
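A rough boto3 equivalent of those console steps is sketched below, assuming the image has already been pushed to ECR and a Fargate-capable cluster and VPC exist; all names, ARNs and network settings are placeholders:

# Hedged sketch: names, ARNs and network settings below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# 1. Task definition: image, memory and port mapping (the console "task" step).
ecs.register_task_definition(
    family="my-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "my-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)

# 2. Service: keep one copy of the task running (the console "service" step).
ecs.create_service(
    cluster="default",
    serviceName="my-app-service",
    taskDefinition="my-app",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)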

Kubernetes: How do I deploy container from saved checkpoint?

I am using the experimental checkpoint feature to start up my app in the container and save its state.
I do so because tests on the app cannot be run in parallel and startup takes a long time.
I want to migrate to Kubernetes to manage the test containers:
Build and start up an app in the container
Save state
Spin X instances from saved container
Run one test on each container
How do I use Kubernetes to do that?
I use GCP.
Container state migration (CRIU) is a feature that Docker has in an experimental state. According to the Kubernetes devs (https://github.com/kubernetes/kubernetes/issues/3949), it looks like it is not something Kubernetes will support in the short term. Therefore, you currently cannot migrate pods with checkpoints (i.e. the app will need to start again). Not sure if creating a container image of your started application could help; that would depend on how the container image was created.
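If you want to try the image-of-a-started-application idea, a hedged sketch with the Docker SDK for Python is shown below; container, repository and tag names are placeholders. Note that committing only captures the filesystem, not the running process's memory, so it only helps if your slow startup mostly produces files (caches, compiled assets, downloaded data):

# Hedged sketch: container, repository and tag names are placeholders.
import docker

client = docker.from_env()

# Container that has already gone through the slow startup phase
warm = client.containers.get("warmed-up-app")

# Snapshot its filesystem into a new image...
image = warm.commit(repository="registry.example.com/myapp-warm", tag="v1")

# ...and push it so the Kubernetes nodes can pull it for the test pods.
client.images.push("registry.example.com/myapp-warm", tag="v1")
print(image.id)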

Restart task in docker service after a certain time

I have a swarm with 3 nodes. On it, I want to launch one service for a database and then another, with some replicas, that runs a Python application. The program takes approximately 30 minutes to finish. After that, the container shuts down and a new one starts. Sometimes, however, a problem occurs and the container does not stop. Is there any option I can use when I launch the service so that, after 1 hour, a container is automatically killed and a new one is created?
You can write an application using the Docker Remote API that creates the container, waits for one hour, and then deletes it so that the swarm deploys a new one. This is not a feature to look for in Docker itself; you should implement it manually using the Docker API.
You can find here a complete list of Docker API libraries to help you get started.
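As an illustration, a minimal sketch of such a watchdog using the Docker SDK for Python, run on each node (the SDK only sees that node's containers); the service name and the one-hour limit are placeholders, and the service's restart policy takes care of scheduling a replacement task:

# Hedged sketch: kill tasks of the "python-app" service that run longer than
# an hour; the swarm restart policy then schedules replacements.
import time
from datetime import datetime, timezone

import docker

MAX_AGE_SECONDS = 3600  # assumed one-hour limit
client = docker.from_env()

while True:
    # Swarm labels each task container with the name of its service.
    for c in client.containers.list(
        filters={"label": "com.docker.swarm.service.name=python-app"}
    ):
        started_raw = c.attrs["State"]["StartedAt"]  # e.g. 2024-01-01T10:00:00.123456789Z
        started = datetime.fromisoformat(
            started_raw.split(".")[0].rstrip("Z")
        ).replace(tzinfo=timezone.utc)
        age = (datetime.now(timezone.utc) - started).total_seconds()
        if age > MAX_AGE_SECONDS:
            c.kill()
    time.sleep(60)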
