I have a Jenkins instance that has users on different projects. I also have a Bitbucket instance where the users store their code. Push/pull access to the code repositories is authenticated by SSH public/private keys. Not all users in Bitbucket have access to all repositories.
I want users to be able to configure their jobs to use their own private keys to pull source from Bitbucket, but users should not be able to access other users' private keys. Is this possible with a combination of the Jenkins Credentials plugin and the Git plugin? If so, how do I configure this?
If this is not possible with the Jenkins Credentials plugin/Git plugin, how can I implement this in Jenkins? I figure enterprises that use Jenkins must have resolved this problem but I can't find a solution anywhere.
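For reference, what I have in mind is roughly the following (the credential ID and repository URL here are just placeholders for illustration):

```groovy
// Jenkinsfile sketch: an SSH private key stored via the Credentials plugin
// under the hypothetical ID 'alice-bitbucket-key' is used by the Git plugin.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // The Git plugin's 'git' step accepts a credentialsId,
                // so each user's job would reference their own key here.
                git url: 'ssh://git@bitbucket.example.com/team-a/app.git',
                    branch: 'master',
                    credentialsId: 'alice-bitbucket-key'
            }
        }
    }
}
```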
Jenkins has a lot of inherent security issues in this respect, and a multi-tenant environment where users have configuration permissions is very difficult to lock down.
Even if you were able to set permissions on a per-credential basis, a user who has configuration permissions to set up their own freestyle job can easily run processes on the nodes that scrape passwords from the environment of other tenants. This can even be done with background processes if you limit each node to one executor slot.
From what I've seen in industry, and have used myself at enterprise scale, there are two high-level recommendations I would make:
1. Break up the single instance into multiple instances, so that each set of users or teams has its own instance to work with.
2. Evaluate what users' functional needs are and provide a capability to request jobs, so that configuration permissions do not need to be given to users, only build/read permissions.
For Item 1:
Breaking up the instances not only helps Jenkins management from a security perspective, but from a scaling perspective as well, since there are several issues you can run into with Jenkins once it reaches a certain size (e.g. users that are resource hogs, unstashing bottlenecks, archiving bottlenecks, poorly written pipelines, etc.). These scaling problems typically lead to a need to scale the Jenkins master vertically.
However, this approach has its own set of issues to solve, since you now have multiple instances to maintain, but that is typically a bit easier to manage, and there are some off-the-shelf solutions available if you're willing to pay the price (e.g. CloudBees CI). Managing multiple instances can be solved in-house as well if you're willing to write some scripts or set up a service to handle it. Personally I'm more of a fan of the in-house solution than the paid one, since I lean towards the ability to control one's own destiny, and off-the-shelf solutions aren't always the one-size-fits-all they claim to be.
For Item 2:
If you really want to keep a single instance, the best way to secure it is to not let users have configuration permissions. As mentioned above, Jenkins has a lot of inherent security issues that do not make it well-suited for users to configure jobs in a multi-tenant setting. By evaluating the users' needs, you often find that they have a lot of common requirements that could be met by common job templates without having to give them configuration permissions.
Leveraging the Job DSL plugin to parameterize job creation is one way to do this. Parameters could then be provided either through a custom service or through configuration files committed to a Git repo. Another approach is to use the Jenkins REST API directly, with a custom service that posts new job configurations generated from custom job templates.
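To illustrate the Job DSL approach, a seed script could look roughly like the following (the team name, repository URL, credential ID, and build script are placeholders):

```groovy
// Job DSL seed script sketch: generates a build job from a few
// team-provided parameters, so teams never need configure permissions.
def teamName   = 'team-a'                                     // placeholder
def repoUrl    = "ssh://git@bitbucket.example.com/${teamName}/app.git"
def credential = "${teamName}-deploy-key"                     // placeholder

job("${teamName}-build") {
    scm {
        git {
            remote {
                url(repoUrl)
                credentials(credential)
            }
            branch('master')
        }
    }
    triggers {
        scm('H/5 * * * *')   // poll SCM roughly every five minutes
    }
    steps {
        shell('./build.sh')  // placeholder build step
    }
}
```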
However, this approach could still run into scaling problems in the long term if utilization of the Jenkins instance is expected to increase. These scaling problems are not insurmountable and can be mitigated with vertical scaling or by offloading some stashing/archiving activity, but at a certain point it might make sense to re-evaluate going with Item 1, or even a combination of Item 1 and Item 2.
Conclusion:
I know this is not likely the answer you were hoping for, but if security is a major concern, then a multi-tenant Jenkins instance that allows users configuration permissions is not the way to go.
I am migrating a microservice system to Helm. The system has roughly 30 distinct deployments, depending on the installation context. We are using Helm 3. Our current layout is a three-tier chart/subchart structure organised by functionality that may or may not be required in a given context. The subcharts, when grouped under a second-level subchart, usually need to be enabled/disabled together, which is easy to do by disabling their parent in the top-level values file. However, there are some scenarios where grandchild charts depend on an 'uncle' chart, and I'm having difficulty finding an elegant solution to these situations.
What are strategies that have been used successfully in other charts?
Two scenarios that currently fall into this category for me are:
I would like to have a global "feature flag" that allows the installer to decide whether a PVC should be created and mounted on the applicable pods, so that they can log to a central place for retrieval later (ELK, I know, I know...). If the flag is set, then the PVC needs to be created and the deployments will mount it. If not, then no PVC should be created and an emptyDir should be used instead.
Some of the deployments use a technical "account" to communicate with each other. So when these services are enabled, I'd like to create a secret with the username/password and run a Job to create the user in our identity provider. That same secret would then be added to the applicable deployments' environment variables. There are a handful of these technical accounts that are reused by multiple deployments. As such, I'd like to create their secret and run the user-creation Job only once.
Many thanks for any hints or helpful experience that you can send my way.
I am deploying a web app onto multiple servers. I would like to push code to one of these servers so that it can be updated without affecting the other deployed servers. The web app and server are unique to each client, so if I add a feature for one client I don't want it added to all the other deployed servers. There will be occasions where an update is required for all the active servers.
I am hoping to automate this functionality so that I can push code for one server without it affecting the other servers. I have been looking into using Docker, Ansible, and Kubernetes for this job but am fairly new to deployment in general and would like to get an idea of the best practices for something like this.
Thank you.
It is now deployed as a standalone web server, where a push to the master branch causes an automatic deployment to that one server. The only solution I've come up with is deploying to each server individually.
Your question is not strictly related to Docker/Kubernetes, but I will try to answer anyway. There are two ways this could be done:
1) Maintain a repo for each customer, which holds the customer's customization and pulls the common code from somewhere else, perhaps via a Git submodule or some other way. This way each customer can have their own lifecycle. I don't like this approach because the drift between customers tends to grow. Also, it is like maintaining as many codebases as there are customers, which is not scalable.
2) The only way I can think of to do this uniformly for all customers is feature flags. This has to happen during development and is not a purely operational exercise. It is difficult to get started, but in the longer term it is the only way I think this can scale. Specific features for one customer can then be turned off by configuration (a rough sketch follows below). There are also companies like LaunchDarkly that enable this.
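As a rough sketch of what the feature-flag approach can look like (the config path, flag names, and language here are purely illustrative, not taken from your stack):

```groovy
// Hypothetical per-customer config file (e.g. /etc/myapp/features.properties):
//   newInvoiceExport=true
//   betaDashboard=false
//
// The same code ships to every customer; customer-specific behaviour is
// switched on or off purely through configuration.
class FeatureFlags {
    private final Properties props = new Properties()

    FeatureFlags(File configFile) {
        configFile.withInputStream { props.load(it) }
    }

    boolean isEnabled(String flag) {
        Boolean.parseBoolean(props.getProperty(flag, 'false'))
    }
}

def flags = new FeatureFlags(new File('/etc/myapp/features.properties'))
if (flags.isEnabled('newInvoiceExport')) {
    // code path for customers that have the feature enabled
} else {
    // default behaviour for everyone else
}
```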
We have around 30 Jenkins installs across our organization, both Windows and Linux. They are all used for different tasks and by different teams (e.g. managing Azure, manipulating data, testing applications etc.)
I have been tasked with looking at whether we could bring these all into one 'Jenkins Farm' but as far as I can see such a thing doesn't exist? Ultimately 'we' want some control and to minimize the footprint of Jenkins. The articles I have found don't recommend using a single Master server (with multiple nodes) because of the following:
No role-based access for projects (affecting other teams' code)
Plugins can affect all projects
Single point of failure as there is only one master server
Is it best to leave these on separate servers? Are there any other options?
I believe role-based access for projects is possible using the Role Strategy Plugin: https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin
However, a single master isn't ideal, as you pointed out, due to 'Plugins can affect all projects'. It is probably best to have separate Jenkins masters but configure agents so that they can be shared across teams/projects.
What are the best practices for a Jenkins installation like the one below?
I have a fairly small dedicated server with 16 GB of RAM and 2 TB of disk space, with hardware virtualization enabled. One use of it is to host my own (open source) projects, and there are applications already set up on it, such as a Git repository manager.
I would like to set up Jenkins there for automatic building, but I want to make it secure.
This installation is small enough to require only a master node, but I am planning to disable building on the master completely and to run a virtual machine as an agent, so that builds are isolated as much as possible on the same physical server and a job cannot destroy the Jenkins master's data.
Should I go master-only anyway? Or, if I use a virtual machine agent, should it have only one executor or several? I probably cannot isolate multiple parallel jobs running on one agent without using one agent per job, but maybe I am overthinking all this. Using one agent per job, at least in the case of virtual machines, would exhaust the server's resources very quickly, or alternatively, my money.
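For what it's worth, I was planning to disable builds on the master with a small Groovy init script along these lines (an untested sketch):

```groovy
// $JENKINS_HOME/init.groovy.d/disable-master-executors.groovy
// Sets the number of executors on the master to zero so that all
// builds are forced onto the VM agent.
import jenkins.model.Jenkins

def jenkins = Jenkins.get()
jenkins.numExecutors = 0
jenkins.save()
```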
You can use Jenkins' own database of users, which I have used in commercial settings and it has worked perfectly well. If you have Active Directory, you can also integrate with that, with some extra effort, so people only have to remember one login.
Once users are logged in, you should provide authorisation via the Role Strategy plugin.
I am not a developer, but I am reading about CI/CD at the moment. Now I am wondering about good practices for automated code deployment. So far I have read a lot about deploying code to a pre-existing environment.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch when a new build is created, for example to test a newly created build, and then delete the environment again after testing.
I know that there are various plugins to interact with AWS, Azure etc. that could be used to develop a job for deployment of a virtual machine.
There are also plugins to trigger Puppet to deploy infra (as code) and there are plugins to invoke an infrastructure orchestration.
So everything is available to be able to deploy the infrastructure and middleware before deploying code (with some extra effort of course).
Is this something that is used in real life? How is it done?
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch when a new build is created
Yes it is good practice to deploy an environment from scratch. Like you say, Jenkins and Jenkins pipelines can certainly help with kicking off and orchestrating that process depending on your specific requirements. Deploying a full environment from scratch is one of the hardest things to automate, and if that is automated, it implies that a lot of other things are also automated, such as infrastructure, application deployments, application configuration, and so on.
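As a rough sketch, a pipeline for this could look like the following, assuming Terraform for the infrastructure and your own deploy/test scripts (all names, variables, and paths are placeholders):

```groovy
// Jenkinsfile sketch: create a disposable environment, deploy, test,
// then tear it down again regardless of the outcome.
pipeline {
    agent any
    environment {
        ENV_NAME = "test-${env.BUILD_NUMBER}"   // unique, disposable environment
    }
    stages {
        stage('Provision') {
            steps {
                sh 'terraform init'
                sh 'terraform apply -auto-approve -var env_name=$ENV_NAME'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh $ENV_NAME'      // placeholder deploy script
            }
        }
        stage('Test') {
            steps {
                sh './run-tests.sh $ENV_NAME'   // placeholder test script
            }
        }
    }
    post {
        always {
            // Destroy the environment whether the tests passed or failed,
            // so nothing is left running (and costing money) afterwards.
            sh 'terraform destroy -auto-approve -var env_name=$ENV_NAME'
        }
    }
}
```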
Is this something that is used in real life?
Yes, definitely. A lot of shops do this. The simpler your environments, the easier it is; a startup with one backend app would therefore have relatively little trouble achieving this Valhalla state. But even the creation of the most complex environments, with hundreds of interdependent applications, can be fully automated; it just takes more time and effort.
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
Yes, definitely. The "spin up and destroy" strategy benefits all hosting models (since, after full automation, no one ever has to wait for someone to manually provision an environment), but those using public clouds see even larger benefits in terms of cost (vs always leaving AWS environments running, for example).
I appreciate your thoughts.
Not a problem. I will advise that this question doesn't fit Stack Overflow's question-and-answer sweet spot super well, since it is quite general. In the future, I would recommend chatting with your developers, finding folks who are excited about this sort of thing, and formulating more specific questions when you all get stuck in the weeds on something. Welcome to Stack Overflow!
All of these are used in various combinations; the objective is to deliver continuous value to the end user. My two cents:
Build & Release
It depends on what you are using. I personally recommend using what comes with your toolset. For example, VSTS (Visual Studio Team Services) offers a complete CI/CD pipeline. But if you have a unique need that can only be served by Jenkins, then you should use that, and VSTS offers Jenkins integration out of the box.
IaC (Infrastructure as Code)
In addition to Puppet etc., you can take advantage of Azure ARM (Azure Resource Manager) templates to build and destroy an environment. Again, see what is available out of the box with the toolset you have.
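For instance (a hedged sketch; the resource group name, location, and template file are placeholders), creating and tearing down an environment from an ARM template can be scripted with the Azure CLI from a pipeline step:

```groovy
// Sketch: build up and tear down an environment from an ARM template using
// the Azure CLI; the same commands could run from a VSTS task instead.
node {
    stage('Create environment') {
        sh 'az group create --name uat-rg --location westeurope'
        sh 'az deployment group create --resource-group uat-rg --template-file azuredeploy.json'
    }
    stage('Tear down environment') {
        sh 'az group delete --name uat-rg --yes --no-wait'
    }
}
```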
Pay-per-use
What I have personally used is Azure DevTest Labs, with code deployed to it via a CI/CD pipeline. You can then set up a shutdown policy on the VMs so they auto-start and auto-shutdown at the times you provide. This is a great feature for saving cost on the resources being used and for replicating environments.
For example, a UAT environment might not be needed until QA has signed off. But using IaC you can quickly spin up the environment automatically and then have a one-click deployment set up to deploy code to UAT.