I run Jenkins and my app is dockerized, i.e. when I run the container it exposes port 3000 and I can point my browser there. On every Github PR I would like to deploy that git commit to a running container somewhere and have Jenkins post back to the PR the link where it can be accessed. On any PR updates it gets auto re-deployed and on PR close/resolve it gets torn down.
I have looked at kubernetes and a little rancher, but what's the easiest way to get this going assuming I can only deploy to one box?
There is a Jenkins plugin, github-pullrequest, that can solve your problem.
Prerequisites:
You have a Jenkins server that is reachable from the internet, if you want to trigger your build by a webhook.
You have a GitHub API token with access/admin rights to your repository; you can generate one yourself in your GitHub settings.
Follow the plugin's configuration guide to set up the Jenkins integration with GitHub.
After configuration:
You can trigger your build on PR events (opened / commit changed / closed) or on a comment matching a specific pattern.
You can read the PR status via the environment variable ${GITHUB_PR_STATE}, so you can start or stop a container depending on its value.
You can publish a comment to the PR announcing the address of your web service after you have started the Docker container.
Regarding exposing a port for the container of each PR: you can just run each container with -p 3000. Docker will then publish container port 3000 on a random port from a range on the Docker host, and docker port <container> will show the specific port number. For example:
container1 with address <host>:32667 for PR1
container2 with address <host>:35989 for PR2
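To make that concrete, here is a rough sketch of a shell build step driven by the plugin's variables. It assumes the plugin also exposes GITHUB_PR_NUMBER next to ${GITHUB_PR_STATE}, that "CLOSE" is the state value on PR close, and that the image name myregistry/myapp and GIT_COMMIT (from the Jenkins Git plugin) are placeholders for your own setup:

# tear down or (re)deploy the preview container for this PR
NAME="preview-pr-${GITHUB_PR_NUMBER}"

if [ "${GITHUB_PR_STATE}" = "CLOSE" ]; then
    docker rm -f "${NAME}" || true        # PR closed/merged: remove the preview
    exit 0
fi

# PR opened or updated: replace any previous container for this PR
docker rm -f "${NAME}" || true
docker run -d --name "${NAME}" -p 3000 "myregistry/myapp:${GIT_COMMIT}"

# ask Docker which host port it picked for container port 3000
PORT=$(docker port "${NAME}" 3000 | head -n1 | awk -F: '{print $NF}')
echo "Preview for PR ${GITHUB_PR_NUMBER}: http://<host>:${PORT}"

The echoed URL is what you would then publish back to the PR as a comment.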
I think the simplest solution would be to create two different Jenkins jobs, one which deploys the container and one which nukes it. These can be triggered by two webhooks in GitHub, one for PR create and one for PR resolve.
As Sylvain GIROD pointed out:
With only one box to run the application you need to change the port that is exposed. When a GitHub PR happens you deploy your application (docker run -p newport:containerport). If you are deploying services you change the target port.
Then you send the link with this port back to the user (email?).
Additionally you need some key-value store to remember which container was created for which PR, so that you can decide, when a PR is updated or closed, whether to destroy the old containers.
I would also suggest giving the services a time to live and regularly cleaning up stale containers/services.
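For concreteness, a hedged sketch of what the two jobs' shell steps might look like, assuming each job receives the PR number as a build parameter (PR_NUMBER is a made-up parameter name, myregistry/myapp a placeholder image). Deriving the host port from the PR number also removes the need for a separate key-value store:

# deploy job (triggered by the PR create/update webhook)
NAME="app-pr-${PR_NUMBER}"
PORT=$((20000 + PR_NUMBER))                 # deterministic port per PR
docker rm -f "${NAME}" || true              # redeploy cleanly on PR updates
docker run -d --name "${NAME}" -p "${PORT}:3000" "myregistry/myapp:${GIT_COMMIT}"
echo "Preview: http://<host>:${PORT}"       # send this link back (email, PR comment, ...)

# teardown job (triggered by the PR close/resolve webhook)
docker rm -f "app-pr-${PR_NUMBER}"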
I am seeing in my stackdriver logs that I am running an old revision of a cloud run service, while the console shows only one newer revision as live.
In this case I have a Cloud Run container built in one GCP project but deployed in a second project, using the fully specified image name. (My attempts at Terraform were sidetracked into an auth catch-22, so until I have restored my determination, I am deploying manually.) I don't know if this wrinkle of the two projects is relevant to my question. Otherwise all the auth works.
In brief, why may I be seeing old deployments receiving traffic minutes after the new deployment is made? Even more than 30 minutes later traffic is still reaching the old deployment.
There are a few things to take into account here:
Try explicitly telling Cloud Run to migrate all traffic to the latest revision. You can do that by adding the following to your .yaml file
metadata:
  name: SERVICE
spec:
  ...
  traffic:
  - latestRevision: true
    percent: 100
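If you manage the service declaratively like this, one way to apply the change is gcloud run services replace; the file name, region and project below are placeholders:

gcloud run services replace service.yaml --region europe-west1 --project my-project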
Try always adding the :latest tag when building a new image, so that instead of only having, say, gcr.io/project/newimage, you have gcr.io/project/newimage:latest. This way you ensure the latest image is being used rather than a previously, automatically assigned tag.
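For example, the build step could tag, push and redeploy explicitly (SERVICE and the region are placeholders; the image path is the one used above):

docker build -t gcr.io/project/newimage:latest .
docker push gcr.io/project/newimage:latest
gcloud run deploy SERVICE --image gcr.io/project/newimage:latest --region europe-west1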
If neither fixes your issue, then please provide the logs, as there might be something useful in them that indicates the root cause. (Also let us know if you are using any caching configuration.)
You can tell Cloud Run to route all traffic to the latest instance with gcloud run services update-traffic [SERVICE NAME] --to-latest. That will route all traffic to the latest deployment, and update the traffic allocation as you deploy new instances.
You might not want to use this if you need to validate the service after deployment and before "opening the floodgates", or if you're doing canary deployments.
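For a canary-style rollout, the same command also accepts an explicit split instead of --to-latest; a sketch with a placeholder revision name:

# send 10% of traffic to the newest revision, keep 90% on a known-good revision
gcloud run services update-traffic SERVICE --to-revisions=LATEST=10,SERVICE-00042-abc=90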
I'm using Gitlab CI, configured with a docker+machine executor, to build and test my app on spot instances.
My main app requires a few microservices to be available in production as well as in the test step. All of these microservices are built and tested on the same GitLab CI server (each in its own pipeline). The output of each microservice is a Docker image that is pushed to the GitLab Docker Registry.
The test step I'm trying to build:
1. Provision a spot instance (if there's no idle one), installed with the microservice docker
2. Test step
   2.1. Provision a spot instance (if there's no idle one), installed with the app docker
   2.2. Testing script
   2.3. Stop the app container, release the spot instance
3. Stop the microservice container, release the spot instance
I've got 2.1, 2.2 and 2.3 working by following the instructions here, but I'm not sure how to achieve the rest. I can run docker-machine explicitly in the YAML, but I'd like to use GitLab's docker+machine executor, since it is already configured with the credentials, limitations, off-peak settings, etc.
Is this possible to do with GitLab's executor? How?
What's the "correct" way to go about doing something like this? I'm sure I'm not the first one testing with microservices, but I couldn't find any information on how to do so.
You are probably looking for the CI Services functionality. The documentation has a couple of examples of how to use a service (MySQL, PostgreSQL, Redis). If you use another Docker image as a service, it will get a hostname derived from the image name (e.g. tutum/wordpress gets the DNS hostnames tutum-wordpress and tutum__wordpress; see the details about hostnames for more info).
There are also details about running PostgreSQL with the shell executor, if you are so inclined, and there is a presentation on testing things with GitLab CI and Docker.
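A minimal .gitlab-ci.yml sketch of that approach; the registry paths, alias and test command are placeholders, not something taken from your setup:

test-app:
  image: registry.example.com/group/main-app:latest      # the app image built earlier
  services:
    # microservice image built by its own pipeline, reachable via the alias below
    - name: registry.example.com/group/pubsub-microservice:latest
      alias: pubsub
  script:
    - ./run-tests.sh --pubsub-host pubsub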
My project is structured in such a way that the build job in Jenkins is triggered from a push to Git. As part of my application logic, I spin up kafka and elastic search instances to be used in my test cases downstream.
The issue I have right now is that when a developer pushes his changes to Git, it triggers a build in Jenkins, which in turn runs our code and spawns a Kafka broker on localhost:9092 and Elasticsearch on localhost:9200.
When another developer, working on some other change at the same time, pushes his code, it triggers the build job again, which tries to spin up another instance of Kafka/Elasticsearch and fails with the exception “Port already in use”.
I am looking at options on how to handle this scenario.
Will running these instances inside of docker container help to some extent? How do I handle the port issue in that case?
Yes, dockerizing these instances can indeed help, since you can then spawn multiple instances of them.
You could create a Docker container per component, including your application, and then let them talk to each other by linking them or by using docker-compose.
That way you don't have to expose any ports to the "outside" world; everything stays internal to the Docker environment, and you won't get the “Port already in use” error. The only remaining problem is memory: if 100 pushes are done to the Git repo, you might run out of memory...
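A rough shell sketch of that idea, using Jenkins' BUILD_NUMBER to give every build its own Docker network so nothing needs a host port at all (image names and versions are placeholders, and a real Kafka container additionally needs ZooKeeper/KRaft settings that are omitted here):

NET="ci-${BUILD_NUMBER}"
docker network create "${NET}"

# no -p flags: the ports only exist inside this build's network, so parallel builds don't collide
docker run -d --network "${NET}" --name "es-${BUILD_NUMBER}" \
    -e discovery.type=single-node docker.elastic.co/elasticsearch/elasticsearch:7.17.0
docker run -d --network "${NET}" --name "kafka-${BUILD_NUMBER}" my-kafka-image

# the test runner joins the same network and reaches the services by container name
docker run --rm --network "${NET}" my-app-tests \
    --es-host "es-${BUILD_NUMBER}:9200" --kafka-host "kafka-${BUILD_NUMBER}:9092"

# teardown
docker rm -f "es-${BUILD_NUMBER}" "kafka-${BUILD_NUMBER}"
docker network rm "${NET}"

docker-compose with a per-build project name (-p "ci-${BUILD_NUMBER}") achieves the same isolation with less scripting.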
I'm setting up a CI/CD workflow for my organization but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code, a trigger is sent to Jenkins to pull the new code and build a new Docker image, which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running container automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed onto it. This is what I want to automatically react to by having my docker hosts then pull the new image and stop, destroy and restart the service using that new version (with the same env flags, labels, etc etc), but I'm not finding any solution to this that fits my use case.
I've found v2tec/watchtower but it's not swarm-aware nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined writing my own solution than I am adopting a fee-based service for this.
One option is to SSH from Jenkins to the swarm manager using the SSH plugin, pull the new image and update the service whenever a new image is pushed to the registry.
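The command executed on the manager is then essentially a docker service update pinned to the freshly pushed tag; a sketch with placeholder registry, tag and service names:

# run on the swarm manager, e.g. via the Jenkins SSH plugin, after the image is pushed
docker service update \
    --with-registry-auth \
    --image registry.example.com/myapp:${BUILD_NUMBER} \
    myapp_service

Since docker service update only changes what you pass to it, the service keeps its existing env flags, labels, networks and placement settings while rolling over to the new image.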
I'm very new to Amazon ECS, and I've written a task definition with 3 containers: one for my PHP application (main-server), a second for a Node application (pubsub-server) and a Redis container.
main-server and pubsub-server link to redis container. (Is this the best way to arrange the containers?)
The cluster runs well. However, I have an update to make in my main-server. I am able to push the updated image to Amazon ECR but my changes don't reflect on the cluster. Is there any additional step to perform to run the updated container on push?
I have tried deregistering the tasks and activating them back. But it doesn't seem to work.
Please let me know if I need to provide anymore details.
You need to force a new deployment.
From the AWS console, update the service definition and check the force new deployment checkbox on the first page, then skip to confirmation page.
From CLI:
aws ecs update-service --cluster [cluster arn] --service [service arn] --force-new-deployment
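If a script or Jenkins pipeline drives this, it can also wait for the rollout to settle before reporting success:

aws ecs update-service --cluster [cluster arn] --service [service arn] --force-new-deployment
aws ecs wait services-stable --cluster [cluster arn] --services [service arn]

With a mutable tag such as :latest, forcing a new deployment starts fresh tasks that pull the updated image (the exact pull behavior can depend on the launch type and agent configuration).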