I have one question regarding Jenkins and connecting it to AWS instances.
I have a Jenkins server/master node and I want it to connect to my AWS instances, which are created and destroyed by the Auto Scaling Group they belong to.
I checked the ec2 and ec2-fleet plugins and, even though they do what I need (connect to each currently existing instance and create a slave for it), they both interfere with the creation/termination of the instances as well as with my Auto Scaling Group settings.
I simply want Jenkins to connect to my instances, do some CI/CD stuff there and that's about it.
I don't want Jenkins to either create or terminate any of my instances.
I can do that with static instances by using their IPs, but I can't seem to find a way to do it with dynamically created instances.
Any ideas on how to work around that?
Thank you!
I want only one instance of my container to ever run at a given time. For instance, say my ECS container writes data to an EFS. The application I am running is built in such a way that multiple instances cannot write to this data at the same time. So is there a way to make sure that ECS never starts more than one instance? I was worried that while one container was being torn down or stood up, two containers might end up running simultaneously.
I was also thinking about falling back to EC2 so that I could meet this requirement but did want to try this out in ECS.
I have tried setting the desired count to 1, but I am worried that this will not work.
Just set the minimum, desired and maximum number of tasks to 1.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-auto-scaling.html#:~:text=To%20configure%20basic%20Service%20Auto%20Scaling%20parameters
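For reference, a minimal boto3 sketch of the same thing, assuming a cluster named my-cluster and a service named my-service (both hypothetical); it pins the service's desired count to 1 and, if Service Auto Scaling is attached to the service, clamps its capacity to 1 as well:

import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("application-autoscaling")

# Pin the service itself to a single task.
ecs.update_service(
    cluster="my-cluster",      # hypothetical cluster name
    service="my-service",      # hypothetical service name
    desiredCount=1,
)

# If Service Auto Scaling is configured, clamp min and max capacity to 1 too.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=1,
)

If the worry is specifically about overlap while a task is being replaced, the service's deployment configuration (maximum percent 100, minimum healthy percent 0) is what tells ECS to stop the old task before starting the new one.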
I want to have a Jenkins job to monitor the performance of other Jenkins instances,
so that I have details of all the instances in one place.
Any suggestions in that regard?
https://medium.com/@eng.mohamed.m.saeed/monitoring-jenkins-with-grafana-and-prometheus-a7e037cbb376
This would be a good solution for what you're trying to achieve.
You can have a dashboard for each instance inside Grafana.
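For the Prometheus side, a minimal scrape configuration could look like the sketch below, assuming each Jenkins instance has the Prometheus metrics plugin installed (it exposes metrics under /prometheus by default) and that the target hostnames are placeholders:

scrape_configs:
  - job_name: jenkins
    metrics_path: /prometheus
    static_configs:
      - targets:
          - jenkins-a.example.com:8080   # hypothetical Jenkins instances
          - jenkins-b.example.com:8080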
I have a deployment of Jenkins in Kubernetes with 2 replicas, exposed as a service behind an nginx-ingress. After creating a project, the next refresh would yield no result for it, as if it had never been created, and the third refresh would show the created project again.
I'm new to Jenkins and Kubernetes, so I'm not really sure what is happening.
Maybe each time the service routes to a different pod, so only one of them has the project created and the others don't. If this is the case, how could I fix it?
PS: I reduced the replicas to 1 and it works as intended, but I am trying to make this a failure-tolerant project.
To my knowledge Jenkins doesn't support HA by design; you can't scale it up just by adding more replicas. Here is a similar question to yours on Stack Overflow.
Nginx is load balancing between the two Jenkins replicas you created.
These two instances are not aware of each other and have separate data, so you alternate between two totally separate Jenkins instances.
One way you can try to solve this is by setting session affinity on the Ingress object:
nginx.ingress.kubernetes.io/affinity: cookie
so that your browser session sticks to one pod.
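As a sketch, the annotation sits in the Ingress metadata roughly like this (service name, host and cookie name are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: JENKINS_AFFINITY
spec:
  ingressClassName: nginx
  rules:
    - host: jenkins.example.com           # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins             # hypothetical Service name
                port:
                  number: 8080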
Also remember to share the $JENKINS_HOME directory between these pods, e.g. using an NFS volume.
And let me know if you find this helpful.
We have a Ruby on Rails application running on EC2 with the autoscaling feature enabled. We have been using whenever to manage cron. So new instances are created automatically from an image of the main instance during traffic spikes and dropped when traffic is low. But this also copies the cron jobs to the newly created instances.
We have a specific requirement where we want to limit cron to a single instance.
I found a gem which looks like it handles this specific requirement, but I am skeptical about it because it is for Elastic Beanstalk and is no longer maintained.
As a workaround, you can add a condition to the cron job so that it only executes on a single instance elected from your Auto Scaling group, e.g. only the oldest instance, or only the instance with the "lowest" instance ID, or whatever condition you like.
You can achieve such a thing by having your instances call the AWS API.
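A minimal sketch of that election with boto3, assuming the script runs on the instance itself, that IMDSv1 instance metadata is reachable, and that the instance with the lexicographically lowest instance ID wins (all names and paths are illustrative):

import sys
import urllib.request

import boto3

# Ask the instance metadata service who we are.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

autoscaling = boto3.client("autoscaling")

# Find the Auto Scaling group this instance belongs to.
asg_name = autoscaling.describe_auto_scaling_instances(
    InstanceIds=[instance_id]
)["AutoScalingInstances"][0]["AutoScalingGroupName"]

# Collect the in-service instances of that group and elect the "lowest" ID.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[asg_name]
)["AutoScalingGroups"][0]
in_service = sorted(
    i["InstanceId"]
    for i in group["Instances"]
    if i["LifecycleState"] == "InService"
)

# Exit 0 only on the elected instance, non-zero everywhere else.
sys.exit(0 if in_service and in_service[0] == instance_id else 1)

The cron entry (hand-written or generated by whenever) can then guard the real command, e.g. python3 /opt/app/cron_leader.py && bundle exec rake my:task, where both the script path and the rake task are placeholders.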
As a more proper solution, you could maybe use a single cronified Lambda accessing your instances; this is now possible, as per this page.
Best is to set scale-in protection. It prevents your instance from being terminated during scale-in events.
You can find more information on AWS here: https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
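For completeness, a small boto3 sketch of setting it from code (group name and instance ID are placeholders); the same flag can also be set in the console or as a default on the group:

import boto3

autoscaling = boto3.client("autoscaling")

# Mark one instance so scale-in events never pick it for termination.
autoscaling.set_instance_protection(
    AutoScalingGroupName="my-asg",           # hypothetical group name
    InstanceIds=["i-0123456789abcdef0"],     # hypothetical instance ID
    ProtectedFromScaleIn=True,
)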
I'm using Neo4j 1.9.M01 in a Spring MVC application that exposes some domain-specific REST services (read, update). The web application is deployed three times into the same web container (Tomcat 6) and each "node" has its own embedded Neo4j HA instance, all part of the same cluster.
The three Neo4j configs:
#node 1
ha.server_id=1
ha.server=localhost:6361
ha.cluster_server=localhost:5001
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
#node 2
ha.server_id=2
ha.server=localhost:6362
ha.cluster_server=localhost:5002
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
#node 3
ha.server_id=3
ha.server=localhost:6363
ha.cluster_server=localhost:5003
ha.initial_hosts=localhost:5001,localhost:5002,localhost:5003
Problem: when performing an update on one of the nodes, the change is replicated to only ONE other node and the third node stays in the old state, corrupting the consistency of the cluster.
I'm using the milestone because it's not allowed to run anything outside of the web container, so I cannot rely on the old ZooKeeper-based coordination in pre-1.9 versions.
Am I missing some configuration here, or could it be an issue with the new coordination mechanism introduced in 1.9?
This behaviour (replication to only ONE other instance) is the same default as in 1.8. It is controlled by:
ha.tx_push_factor=1
which is the default.
Slaves get updates from master in a couple of ways:
By configuring a higher push factor, for example:
ha.tx_push_factor=2
(set this on every instance, because the value in use is the one on the current master).
By configuring a pull interval at which slaves fetch updates from the master, for example:
ha.pull_interval=1s
By manually pulling updates using the Java API
By issuing a write transaction from the slave
See further at http://docs.neo4j.org/chunked/milestone/ha-configuration.html
A first guess would be to set
ha.discovery.enabled = false
see http://docs.neo4j.org/chunked/milestone/ha-configuration.html#_different_methods_for_participating_in_a_cluster for an explanation.
For a full analysis, could you please provide data/graph.db/messages.log from all three cluster members?
Side note: it should be possible to use 1.8 for your requirements as well. You could also spawn ZooKeeper directly from Tomcat, just mimic what bin/neo4j-coordinator does: run the class org.apache.zookeeper.server.quorum.QuorumPeerMain in a separate thread upon startup of the web application.