We currently have some servers on AWS that are scheduled to be stopped and started; this is done through some Python scripts.
I am looking for a way to deploy code onto these servers so that even if some are stopped, they are automatically deployed to/updated when they come back up. At the moment we have to run scripts to bring the stopped servers up and then deploy, which gets tiring, and I am hoping to improve on it.
We use Jenkins for our deploy pipeline, and I have been experimenting with CodeDeploy to see whether it can do this, but no luck so far.
We don't want to go with an Auto Scaling group for the time being, and we are also trying to move away from scheduling scripts on each of those stopped servers to update the code.
Any help will be greatly appreciated!
Thanks
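In case it helps anyone reading later: one way this kind of "redeploy when the instance comes back" glue is often wired up is an EventBridge rule on EC2 instance state-change events that invokes a small Lambda, which simply re-creates the deployment group's last CodeDeploy deployment. A minimal sketch, assuming placeholder application, deployment-group, and tag names (this is only an illustration, not a tested setup):

```python
# Sketch of a Lambda handler: when a tagged instance enters "running",
# re-deploy the deployment group's current target revision via CodeDeploy.
# APPLICATION, DEPLOYMENT_GROUP and the "App" tag are placeholders.
import boto3

APPLICATION = "my-app"            # placeholder
DEPLOYMENT_GROUP = "my-app-dg"    # placeholder

codedeploy = boto3.client("codedeploy")
ec2 = boto3.client("ec2")


def handler(event, context):
    # EventBridge "EC2 Instance State-change Notification" payload
    detail = event.get("detail", {})
    if detail.get("state") != "running":
        return

    # Optionally confirm the instance belongs to this app (tag check)
    instance_id = detail["instance-id"]
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
    if not any(t["Key"] == "App" and t["Value"] == APPLICATION for t in tags):
        return

    # Re-use whatever revision the deployment group last deployed
    info = codedeploy.get_deployment_group(
        applicationName=APPLICATION,
        deploymentGroupName=DEPLOYMENT_GROUP,
    )["deploymentGroupInfo"]
    revision = info.get("targetRevision")
    if not revision:
        return  # nothing has been deployed yet

    # Note: this re-runs the deployment across the whole group,
    # not just the instance that started.
    codedeploy.create_deployment(
        applicationName=APPLICATION,
        deploymentGroupName=DEPLOYMENT_GROUP,
        revision=revision,
        description=f"Re-deploy after {instance_id} started",
    )
```

Since create_deployment targets the whole deployment group, CodeDeploy re-pushes the same revision to every in-service instance, not just the one that just came up; for a small fleet that is usually acceptable.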
Related
I am wondering if someone has experience running Jenkins on EC2.
I am currently running a t3.medium and can only run about 10 executors; beyond that I get timeouts. The job being run takes an existing AMI, unpacks it, puts stuff in it, repacks it, and finishes the process with Terraform, where the instance that gets raised is an m5.large.
I am planning to get a bigger machine for the Jenkins controller itself.
So I was looking at some AMD machines, and my goal is to be able to run around 50-100 executors. Can you recommend something for that?
I was looking into the m6a.xlarge: will it do it, or would something smaller be good enough?
I am running Docker containers on a Raspberry Pi. I am missing the last piece of the puzzle in my CI/CD project, which is to automate the flow (pull the newest Docker images from my repo and deploy them), and I was wondering if someone else has figured this process out. Is there an existing script or a polling service/listener that waits for changes in my Docker repo? I have tried to search around for solutions or hints on how to get this working, but my attempts have so far been unsuccessful.
Any hints/tips/links are very much appreciated.
I know it's been 4 months, but this might be what you're looking for: https://fluxcd.io/docs/guides/image-update/
It will check the hub at specified intervals and retrieve the latest tags, and you can also set it up to automatically update your deployments.
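If you'd rather not run Flux (it assumes Kubernetes), the same "poll and redeploy" idea can be approximated with a small script on the Pi. A rough sketch, where the image name and restart command are placeholders:

```python
# Rough polling sketch: pull the image periodically and restart the
# container(s) only when the local image ID actually changes.
# IMAGE and RESTART_CMD are placeholders for this example.
import subprocess
import time

IMAGE = "myuser/myapp:latest"                      # placeholder
RESTART_CMD = ["docker", "compose", "up", "-d"]    # placeholder (Compose v2)
INTERVAL = 300  # seconds between checks


def image_id(image: str) -> str:
    """Return the local image ID, or "" if the image is not present yet."""
    result = subprocess.run(
        ["docker", "image", "inspect", "--format", "{{.Id}}", image],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else ""


while True:
    before = image_id(IMAGE)
    subprocess.run(["docker", "pull", IMAGE], check=True)
    after = image_id(IMAGE)

    if after and after != before:
        # A newer image was pulled: recreate the container(s) from it.
        subprocess.run(RESTART_CMD, check=True)

    time.sleep(INTERVAL)
```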
Good luck :)
I've been running Jenkins in a container for about 6 months, with only one controller/master and no additional nodes, because it's not needed in my case, I think. It works OK. However, I find it a hassle to make changes, not because I'm afraid it will crash, but because it takes a long time to build the image (15+ min), installing SDKs etc. (1.3 GB).
My question is: what is the state of the art for running Jenkins? Would it be better to move Jenkins to a dedicated server (VM) behind a web server (reverse proxy)?
Is 15 mins a long time because you make a change, build, find out something is wrong and need to make another change?
I would look at how you are building the container: install all the SDKs in the early layers so that rebuilds can use those layers from cache, and move the layers that change most often as late as possible, so that as little of the image as possible needs rebuilding.
It is then worth looking at image layer caches if you clean your build environment regularly (I use Artifactory).
Generally, I would advocate doing as little building as possible on the controller and shipping it out to agents that are capable of running Docker.
That way you don't need to install loads of SDKs or change your controller that often, as you do all of that in containers as and when you need them.
I use the EC2 cloud plugin to dynamically spin up agents as and when they are needed. But you could have a static pool of agents if you are not working in a cloud provider.
I'm thinking about the following high-availability solution for my environment:
Datacenter one, with one powered-on Jenkins master node.
A disaster-recovery datacenter, with one powered-off Jenkins master node.
Datacenter one is always powered on; the second is only for disasters. My idea is to install the two Jenkins instances using the same IP but with a shared NFS. If the first goes down, the second starts with the same IP and I still have my service.
My question is: can this solution work?
Thanks all for the help ;)
I don't see any particular reason why it should not work. But you still have to monitor it in case of a switch-over, because I have faced a situation where jobs that were running when Jenkins abruptly shut down were still in the queue when the service was recovered but never completed afterwards; I had to manually delete the builds using the script console.
On the Jenkins forum a lot of people have reported such bugs; most of them seem to have been fixed, but there are still cases where this can happen, because every time Jenkins is started or restarted the configuration is reloaded from disk. So there is sometimes an inconsistency between the in-memory state that existed earlier and the reloaded configuration.
So in your case, it might happen that an executor thread is still blocked when the service is recovered. Thus you have to make sure that everything is running fine after recovery.
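For the switch-over itself, a minimal watcher on the standby host could look something like this; the primary URL, virtual IP, and interface are placeholders, it would need to run as root, and in practice you would also want gratuitous ARP/routing updates and protection against split-brain. A sketch of the idea only, not a tested failover setup:

```python
# Sketch of a failover watcher for the standby datacenter: if the primary
# Jenkins stops answering, claim the shared IP and start the local Jenkins
# (which reads the same JENKINS_HOME from the shared NFS mount).
# PRIMARY_URL, VIRTUAL_IP and INTERFACE are placeholders.
import subprocess
import time

import requests

PRIMARY_URL = "http://jenkins-primary.example.com/login"  # placeholder
VIRTUAL_IP = "10.0.0.50/24"                               # placeholder
INTERFACE = "eth0"                                        # placeholder
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL = 30  # seconds

failures = 0
while True:
    try:
        requests.get(PRIMARY_URL, timeout=5).raise_for_status()
        failures = 0
    except requests.RequestException:
        failures += 1

    if failures >= FAILURES_BEFORE_FAILOVER:
        # Take over the shared IP and bring up the standby Jenkins.
        subprocess.run(["ip", "addr", "add", VIRTUAL_IP, "dev", INTERFACE], check=True)
        subprocess.run(["systemctl", "start", "jenkins"], check=True)
        break  # failover done; failing back is a manual step

    time.sleep(CHECK_INTERVAL)
```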
We are currently working on Dockerizing our Ruby on Rails application, which also includes Delayed Job. A question buzzing within our development team is whether, and if so how, to Dockerize the Delayed Job component separately from the application.
This would allow us to start up new Delayed Job containers when necessary for high traffic in the job queue. In addition, since Delayed Job actually boots the Rails application when it first starts up, we thought the following benefits would follow:
The Delayed Job container might start up quicker
The application would start up regardless of the Delayed Job container's startup time
So I know a guy responsible for a Rails app that uses Delayed Job. When it came time to Dockerize said app, each got its own container. Both containers use the same codebase, but one runs the frontend and one runs the jobs. It's not devops microservice-eriffic, but it works.
Beyond the logical separation between the two, Docker containers are supposed to have only a single process running inside. We could've hacked it together, but it seemed wrong to break a Docker fundamental right out of the gate.