I have a Docker container in OpenShift. This container runs a Spring Boot microservice that I want to execute only every X minutes.
How can I do that using OpenShift?
I don't know how to create a cron job or similar to launch this microservice every X minutes.
Thanks!
Assuming you are exposing an HTTP service, you can combine cron and curl: configure a cron job inside your Docker container that periodically sends a curl request to invoke your microservice.
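As a sketch, assuming the microservice listens on port 8080 inside the container and exposes a hypothetical /tasks/run endpoint (both are placeholders, adjust to your service), the crontab entry for a 5-minute interval could look like this:

```shell
# /etc/cron.d/invoke-microservice -- fires every 5 minutes
# (port and endpoint path are assumptions; replace with your own)
*/5 * * * * root curl -sf http://localhost:8080/tasks/run >> /var/log/invoke-microservice.log 2>&1
```

Note that the cron daemon itself must be started when the container launches, e.g. from your entrypoint script, or the job will never fire.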
I installed GitLab and Docker on Ubuntu. Now I need to install gitlab-runner using the Docker executor. Is it necessary for GitLab itself to be running in Docker, or is it enough if both run on the same machine?
GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. So it only needs connectivity to GitLab, which is established by registering the runner:
Registering a Runner is the process that binds the Runner with a GitLab instance.
If you want to use Docker, GitLab Runner requires a minimum of Docker v1.13.0.
It allows you to:
Run multiple jobs concurrently.
Use multiple tokens with multiple servers (even per-project).
Limit the number of concurrent jobs per token.
Jobs can be run:
Locally.
Using Docker containers.
Using Docker containers and executing job over SSH.
Using Docker containers with autoscaling on different clouds and virtualization hypervisors.
Connecting to remote SSH server.
The GitLab Runner version should be kept in sync with the GitLab version; features may be unavailable or may not work properly if there is a version difference.
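For reference, registering a runner with the Docker executor looks roughly like this; the URL and registration token are placeholders you obtain from your GitLab instance (Settings > CI/CD > Runners):

```shell
# Non-interactive registration with the Docker executor.
# URL and token below are placeholders, not real values.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "docker-runner"
```

Since the runner only needs network connectivity to the GitLab instance, it does not matter whether GitLab itself runs in Docker or directly on the host.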
I have deployed a few microservices in AWS ECS (via CI/CD through Jenkins). Each one has its own service and task definition. Apache runs in the foreground, and the service redeploys the container if Apache crashes.
My dev team is using RabbitMQ to communicate between microservices. A few microservices need to listen for certain events on the RabbitMQ server (RabbitMQ is installed on a separate EC2 instance).
To listen to the RabbitMQ server, should we run the listener as a daemon? The dev team has asked to run the following command at Docker deployment time so that it listens to the RabbitMQ server:
php /app/public/yii queue/listen
and to set up a cron job so that the listener is restarted if it crashes.
To the best of my knowledge, a Docker container can run only one process in the foreground, and currently that process is Apache. If I run the daemons (both cron and the RabbitMQ listener) in the background, the container won't be restarted when one of those daemons crashes.
Is there a safer approach for this scenario? What is the best practice for running a RabbitMQ listener in a Docker container?
If your problem is running multiple processes in one container, a general approach is to create a script, e.g. start_service.sh, in your container and execute it from the CMD directive of your Dockerfile, like this:
#!/bin/bash
process1 ... &
process2 ... &
daemon-process1
sleep infinity
The & makes the script continue after starting a process in the background, even if that process is not designed to run as a daemon. The sleep infinity at the end prevents the script from exiting, which would stop the container.
If you run several processes within your container, consider using an "init" process like dumb-init as PID 1 in the container. Read more here: https://github.com/Yelp/dumb-init/blob/master/README.md
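A minimal sketch of wiring in dumb-init as PID 1, assuming the start_service.sh script above and a Debian-based base image (which are assumptions; the package is also available via pip and as a standalone binary):

```dockerfile
# dumb-init as PID 1 forwards signals to children and reaps zombies
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y dumb-init \
    && rm -rf /var/lib/apt/lists/*
COPY start_service.sh /app/start_service.sh
RUN chmod +x /app/start_service.sh
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/app/start_service.sh"]
```

With this setup, `docker stop` delivers SIGTERM to all the background processes instead of only to the shell script.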
When hosting applications on Heroku I often trigger one-off dynos via the Heroku API from within the code to do heavy lifting in the background. I recently set up some stacks on AWS and followed a similar pattern by using AWS ECS run task.
I am not using long running queue workers for this as hardware resources vary heavily according to the specific task and usually the workload occurs in peaks.
For local development, I have usually skipped this topic by either executing the background tasks within the running container or triggering the background command manually from the console. What would be a good approach for running one-off containers locally?
ECS supports scheduled tasks: if you know when your peaks are planned, you can use scheduled tasks to launch Fargate containers on a schedule.
If you don't, what we did was write a small API Gateway -> Lambda function that dynamically launches Fargate containers, with a few variables such as CPU, memory, and port defined in the POST body sent to the API Gateway endpoint. Alternatively, pre-create task definitions and just pass the task definition name to the API; that is another option if you already know what the majority of your "settings" for the container should be.
You can simply call the ECS RunTask API from inside the container.
All you need is to set up the ECS task role with RunTask permissions and to have either the AWS CLI or any AWS SDK in the container to make the RunTask call.
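An illustrative AWS CLI sketch of that call from inside the container; the cluster, task definition, subnet, and security group names are all placeholders, and the task role attached to the calling container must allow ecs:RunTask (plus iam:PassRole for the launched task's roles):

```shell
# One-off Fargate task launched from inside another container.
# All identifiers below are placeholders for illustration.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-oneoff-task:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}' \
  --count 1
```

The same call is available in every AWS SDK if you prefer to trigger it from application code rather than the shell.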
You can pass the Docker socket as a volume:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After that you can run Docker commands inside the container, and they will be executed by the Docker daemon on the host machine.
In particular you can run
docker run ...
or
docker start ...
(you may have to install the Docker CLI in your container via commands in the Dockerfile)
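Putting that together, a docker-compose sketch (service name and command are made up for illustration) that lets a container drive the host's Docker daemon:

```yaml
# The mounted socket makes `docker run` inside the container
# start sibling containers on the host, not nested ones.
services:
  controller:
    image: docker:cli        # official image variant that ships the docker CLI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: docker run --rm alpine echo "started from inside a container"
```

Be aware that mounting the socket grants the container root-equivalent control over the host's Docker daemon, so only do this for trusted workloads.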
I installed an Nginx container service through AWS ECS, and it runs without any issue. However, every other container service, such as CentOS, Ubuntu, MongoDB, or Postgres, installed through AWS ECS keeps restarting (de-registering, re-registering, or sitting in a pending state) in a loop. Is there a way to install these container services using AWS ECS without this issue on the ECS-optimized Linux AMI? Also, is there a way to register containers in AWS ECS that were manually pulled and run from Docker Hub?
Usually if a container is restarting over and over again, it is because it is not passing the health check that you set up. MongoDB, for example, does not speak the HTTP protocol, so if you set it up as a service in ECS with an HTTP health check, it will never pass and will be killed off by ECS for failing the health check.
My recommendation would be to launch such services without an HTTP health check, either as standalone tasks or with your own health check mechanism.
If the service you are trying to run does in fact have an HTTP interface and it is still failing the health check and getting killed, then you should do some debugging to verify that the instance has the right security group rules to accept traffic from the load balancer. Additionally, verify that the ports you define in your task definition match the port of the health check.
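If you do want a health check for a non-HTTP service such as MongoDB, one option is a container-level health check in the task definition rather than the load balancer's HTTP check. A hedged sketch of the containerDefinitions fragment (the command is an assumption about what "healthy" means for your image; older MongoDB images ship mongo instead of mongosh):

```json
"healthCheck": {
  "command": ["CMD-SHELL", "mongosh --eval 'db.adminCommand(\"ping\")' || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}
```

This way ECS judges health by a command run inside the container instead of by an HTTP probe the service cannot answer.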
Is it possible to postpone the startup of a container based on the availability of a separate HTTP service? For example, only start the container once something is listening on port 8080?
That sort of application-level service check isn't available in docker-compose. You would need to implement the necessary logic in your docker images.
For example, if you have something that depends on a web service, you could have your CMD run a script that does something like this:
while ! curl -sf http://servicehost:8080/; do
  sleep 1
done
exec myprogram
Another option is to set a restart policy of always on your containers and have them exit if the target service isn't available. Docker will then restart the container repeatedly until it stays running.
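In docker-compose terms, the restart-policy approach is just this (the service name is a placeholder):

```yaml
services:
  myprogram:
    restart: always   # restart whenever the container exits, unless explicitly stopped
```

Combined with a program that exits when its dependency is unreachable, this gives you crude retry-until-ready behavior without any wait script.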