I have Jenkins installed on a Linux machine, running inside Tomcat behind Apache.
As a consequence, Jenkins runs as the tomcat user, and that user is not configured properly to run the job.
How do I tell Jenkins to run a job as a different user on the same machine?
Does it make sense to define a slave on the same machine but with a different user?
You can use su or sudo to run as a different user inside a build step, but that comes with some security implications. If this is something you want to do regularly, I would recommend defining a slave on the same machine with the other user, as you suggested. Unfortunately, I am not aware of any plugins/extensions that would make this easier for you.
Sometimes, instead of running build.sh directly, I would run ssh user@localhost build.sh so the step executes under the other account; see the sketch below.
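As a minimal sketch of that kind of build step, assuming a local account named builduser exists and either a sudoers entry or key-based SSH login is already set up (both names and paths are placeholders):

# Option 1: sudo to the other account (needs a sudoers rule for the tomcat user)
sudo -u builduser -H /path/to/build.sh

# Option 2: ssh back into the same machine as the other account (needs SSH keys)
ssh builduser@localhost /path/to/build.sh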
I have an Ansible playbook which I use to provision a server. The provisioning works fine; the server is up and running.
Now I want to test this playbook regularly and in an automated way. My repo is hosted on GitHub, so I want to use GitHub Actions for this CI build.
In my mind, I would start a Docker container, run my playbook against it, and shut the container down again. But I don't really know where to start. There must be a better way than a "shell script" inside the pipeline; I don't want to run every command by hand. But even if that is what it takes, I can't get a grip on how to do it.
So basically my questions are:
How can I test Ansible playbooks and actually run this playbook in a disposable environment? This way I could test the actual installations.
How can I implement this in a Github Actions pipeline?
Maybe Docker is not the best idea to begin with, because the Ubuntu image is stripped down (compared to an actual installation on a physical machine). Maybe Vagrant is the better idea? But I have even less of an idea of how to tackle Vagrant.
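As a starting point, the disposable-container idea can be expressed as a short script and called from a run: step in a GitHub Actions workflow. A rough sketch, assuming Docker and Ansible (with the community.docker collection) are available on the runner and the playbook is named playbook.yml; the image, container name, and connection plugin are illustrative choices:

#!/usr/bin/env bash
# Spin up a throwaway container, run the playbook against it, then clean up.
set -euo pipefail

docker run -d --name ansible-test ubuntu:22.04 sleep infinity

# The stock image is stripped down, so give Ansible a Python interpreter to talk to.
docker exec ansible-test bash -c 'apt-get update && apt-get install -y python3'

# The Docker connection plugin talks to the container directly; no SSH daemon needed.
ansible-playbook -i 'ansible-test,' -c community.docker.docker playbook.yml

docker rm -f ansible-test

The main caveat is that a container is not a full machine (no systemd by default), which is exactly where Vagrant or a VM-based runner becomes the better fit for testing services.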
The Jenkins cluster in my company runs builds as the root user.
How do I configure the cluster/build to run as a different user, without root privileges?
Builds always run under the user that runs the node agent process. So your options are:
Specify a different user for connecting the node, or
Switch to a different user during the build (e.g., via sudo in a shell build step). This is more flexible, but plugin-related code (like the SCM checkout) will still run under the root account.
Any agent can be configured to be launched as any user, so do that.
Advise your company's Jenkins admin to change Jenkins immediately to NOT run as root. It does not need root (it can still run as a daemon/service), and running as root increases your risk exposure. We use the Java Service Wrapper (RUN_AS_USER=jenkins) on Unix. The new Windows installer prompts you for the account to use (don't use System, even though it is the default).
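As a concrete illustration of connecting the node as a different user, an inbound (JNLP) agent can simply be started under a dedicated unprivileged account; a sketch, where the account name, Jenkins URL, agent name, and secret are placeholders taken from the node's configuration page:

# Create a dedicated, unprivileged account for the agent.
sudo useradd -m -s /bin/bash jenkins-agent

# Launch the inbound agent as that account; builds on this node then run as
# jenkins-agent, not root.
sudo -u jenkins-agent java -jar agent.jar \
  -url https://jenkins.example.com/ \
  -name linux-agent \
  -secret "$AGENT_SECRET" \
  -workDir /home/jenkins-agent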
Hey, I am currently learning Jenkins Pipeline for CI and CD.
I successfully deployed my Express.js app with Jenkins on my local machine/server.
The problem is that my ENV values were exposed in my public repository.
I am trying to understand how to hide that ENV in Jenkins, for example by using variables.
And is it possible to use variables in the Dockerfile as well, to hide my ENV?
In my Jenkins pipeline, I pass my ENV with docker run -p -e myEnV=key.
I would love to hide my ENV so people can't see my keys inside my Jenkinsfile and Dockerfile.
I am using a multibranch pipeline in Jenkins, because I followed the Hackernoon article on deploying React and Node.js apps with Jenkins.
And anyway, what are the advantages of pushing our container image to Docker Hub?
If we push it there and later want to move to another server, we just need to pull the image from our Docker Hub repository on the new server, because every build gets pushed to that repository, right?
For your first question, you should use the EnvInject plugin. Or, if you are running Docker from the pipeline, set the environment variable in Jenkins and then access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your docker run command will be:
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
Using EnvInject is pretty simple, and you can also inject variables from a file.
For your second question: yes, just pull the image from Docker Hub and use it.
Anyone on your team can pull and run it, so only the Jenkins server has to build and push the image. That saves time for everyone else, and the image will be up to date and also available remotely.
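For illustration, moving to a new server then looks roughly like this (the repository name, tag, and port are placeholders, not from your setup):

# On the new server: pull whatever Jenkins last pushed, then run it.
docker pull mydockerhubuser/my-express-app:latest
docker run -d -p 3000:3000 mydockerhubuser/my-express-app:latest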
Never push or keep sensitive data in your Docker image.
Using Docker Hub or any kind of registry, like Sonatype Nexus, Docker Registry, or JFrog Artifactory, helps you keep your images with their tags and share them with anyone. It also means the images are safe there: if your local environment goes down, the images will still be available. It also helps with versioning. If you are using multibranch pipelines, you will probably generate different images with different tags.
Running Jenkins, the build jobs, and the deployments all on one server is not a good practice. In my experience from previous work, the typical examples are: the server starts getting bloated after some time, Jenkins stops working at exactly the moments you need it most, and the application you deployed stops working because Jenkins has too many jobs eating all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slaves do. This keeps Jenkins alive all the time. If a slave goes down, you can simply set up another one.
For deployment, I am using Ansible which can simultaneously deploy the same docker image to multiple servers. It is easy to use and in my opinion quite safe as well.
For sensitive data such as keys, passwords, and API keys, you are right about using the -e flag. You can also use --env-file; that way, you keep the values in a file outside the Docker image. For passwords, I prefer to have a shell script that writes them into environment files.
If you are planning to use the environment as it is, you can safely store the value you are going to set as an environment variable inside Jenkins and then read it back as a variable. You can see how on the Jenkins website.
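To make the --env-file idea mentioned above concrete, here is a minimal sketch; the file name, variable names, and image are examples only:

# Keep the secrets in a file on the host, outside the image and the Jenkinsfile.
cat > app.env <<'EOF'
myEnV=some-secret-key
DB_PASSWORD=another-secret
EOF
chmod 600 app.env

# Pass the whole file at run time instead of individual -e flags.
docker run -d -p 3000:3000 --env-file app.env my-express-app:latest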
I've installed Jenkins from the repository for Red Hat 7.
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
yum install jenkins
And I have a very simple question. After the install we have the system user jenkins:
jenkins:x:956:967:Jenkins Automation Server:/var/lib/jenkins:/bin/false
What is the general purpose of this user? (I know that it runs the Jenkins instance.) Is that all it is for?
I never thought about it, and searching didn't turn up an answer. :)
If I change it to a login user, set a password, etc., what can that affect? Any future behavior/issues of the Jenkins server?
Thank you.
Jenkins is basically a service/daemon, i.e. similar to lots of other services that don't actually need a shell to run, like apache, sendmail, etc.
Try comparing it with other services by doing cat /etc/passwd on your system.
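For example, something like this shows several service accounts in the same shape, assuming those services are installed on your box (the account names are just illustrative):

# Service accounts typically have no real login shell (/bin/false or /sbin/nologin).
grep -E '^(jenkins|apache|tomcat|postfix)' /etc/passwd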
Jenkins uses the user jenkins to manage permissions & segregation between services like other services do.
By default, Jenkins uses the sh shell to execute the shell commands defined in your build jobs.
In case you really want to log in as the jenkins user and perform operations, give it a shell by running the command below:
usermod -s /bin/bash jenkins
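Once it has a shell, you can switch into the account to poke around or debug a job, for example (the jobs path below is the default for the RPM install):

sudo su - jenkins
whoami                      # jenkins
ls /var/lib/jenkins/jobs    # the jobs Jenkins manages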
Travis CI has a really nice feature: builds are run inside VirtualBox VMs. Each time a build starts, the box is refreshed from a snapshot and the code is copied onto it. Any problems with the build cannot affect the host, and you can use any OS to run your builds on.
This would be really useful, for example, for compiling and testing code on a guest OS that matches your production environment. You also keep your host free of any installation dependencies you might need (e.g., a database server) and can run integration tests without worrying about things like port conflicts.
Does such a thing exist for Jenkins?
Check out the Vagrant Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Vagrant-plugin
This plugin allows booting Vagrant virtual machines, provisioning them, and also executing scripts inside them.
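If you want a feel for what the plugin automates, the manual equivalent with the Vagrant CLI looks roughly like this (the box name and test script are placeholders):

vagrant init hashicorp/bionic64   # pick a base box matching your production OS
vagrant up                        # boot a fresh VM from the box
vagrant ssh -c './run-tests.sh'   # run the build/tests inside the guest
vagrant destroy -f                # throw the VM away afterwards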
You can run Jenkins in a master/slave setup. Your master instance manages all the jobs but lets the slaves do the actual work. These slaves can be VMs or physical machines. Go to Manage Jenkins -> Manage Nodes -> New Node to add nodes to your Jenkins setup.
There are also the vSphere Cloud Plugin and the Scripted Cloud Plugin, which can be used for this purpose.