Running terragrunt run-all destroy in Jenkins needs user input

I am currently using Terragrunt to manage my Terraform code, and am running into an error when trying to destroy my infrastructure. I want to be able to spin up development environments (or parts of them) and delete them easily through Jenkins.
Part of my infrastructure is structured so that I need to use the terragrunt run-all command, which prompts with the following message: WARNING: Are you sure you want to run `terragrunt destroy` in each folder of the stack described above? There is no undo! (y/n). Jenkins fails immediately after this output because it expects a y/n response on stdin.
For apply I managed to work around this by saving a plan and then applying it, but for destroy I can't find an equivalent. Terraform commands have an -auto-approve option, but it seems to have no effect on terragrunt run-all, despite this being in the documentation:
Using run-all with apply or destroy silently adds the -auto-approve flag to the command line arguments passed to Terraform due to issues with shared stdin making individual approvals impossible.
Does anyone have any experience of this or any advice? Am I misunderstanding the documentation?

Just in case anyone else is looking for an answer to this: the --terragrunt-non-interactive flag can be used.
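A minimal sketch of how this could look in a Jenkins shell step (the directory layout and variable names are assumptions, not from the original setup):

```shell
#!/usr/bin/env bash
# Hypothetical Jenkins "destroy" shell step.
# --terragrunt-non-interactive answers every terragrunt (y/n) prompt,
# including the run-all destroy warning, so the job never blocks on stdin.
set -u
ENV_DIR="environments/dev"   # assumed terragrunt stack directory
DESTROY_CMD="terragrunt run-all destroy --terragrunt-non-interactive --terragrunt-working-dir ${ENV_DIR}"
echo "would run: ${DESTROY_CMD}"
# ${DESTROY_CMD}   # uncomment on an agent that has terragrunt installed
```

Terragrunt also reads most of its flags from environment variables, so exporting TERRAGRUNT_NON_INTERACTIVE=true in the job environment should have the same effect.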

Related

My docker container keeps instantly closing when trying to run an image for bigcode-tools

I'm new to Docker, and I'm not sure how to quite deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, and that image doesn't close instantly.
I've looked at some potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times ends up queuing containers, since the port is tied to the alias.
Like in the instructions from the GitHub link, I'm trying to set an alias/doskey to make api calls to pull data, but I am unable to get any data nor am I getting any errors when performing the calls on the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use set instead of export.
I'm not exactly sure what the $ means in UNIX, or whether it has significant meaning, but from my understanding the whole purpose is to create a directory named bigcode-workspace.
Instead of alias, I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, though I'm not 100% sure what it does. Running docker run (...) without it resulted in the image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I added --token=(github_token), but that didn't change anything either.
Because the later steps require data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead you invoke it via the docker-bigcode alias for each task.
$BIGCODE_WORKSPACE is an environment-variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings->System->About->Advanced System Settings, because variables set with the set command apply only to the current command-prompt session. Or you can specify the path directly, without the environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.

Remove docker image if it exists

I have a Debian package I am deploying that comes with a Docker image. On upgrading the package, the prerm script stops and removes the Docker image. As a fail-safe, the preinst script does the same, to ensure the old image is removed before the new one is installed. If there is no image, the following errors are reported to the screen: (for stop) No such container: <tag> and (for rmi) No such image: <tag>.
This isn't really a problem, as dpkg ignores the errors, but they are printed to the screen, and I get constant questions from users: is that error OK? Did the install fail? And so on.
I cannot seem to find the correct set of docker commands to check whether a container is running (so I can stop it) and whether an image exists (so I can remove it), so that those errors are no longer generated. All I have to work with is the Docker image tag.
I think you could go one of two ways:
Knowing the image, you could check whether any container is based on it. If so, find out whether that container is running and stop it if it is; then remove the image. This prevents the error messages, though other messages about container and image handling may still be visible.
Redirect the output of the docker commands in question, e.g. 2>/dev/null (the errors go to stderr).
You're not limited to the Docker CLI: you can combine docker commands with shell (or DOS) commands, or write your own .sh scripts. If you don't want to see the errors, you can either redirect them or store them in a file:
to redirect: {operation} 2>/dev/null
to store: {operation} 2>>/var/log/xxx.log
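For the maintainer-script case here, a sketch along these lines combines both suggestions: inspect first, act only when the object exists, and silence what remains (the image tag is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch for the prerm/preinst step: act only when the container/image
# actually exists, so no "No such ..." errors reach the screen.
# IMAGE_TAG is a placeholder for the tag the package ships.
IMAGE_TAG="myapp:1.0"

# All containers (running or stopped) created from that image; empty if none.
containers=$(docker ps -aq --filter "ancestor=${IMAGE_TAG}" 2>/dev/null || true)
if [ -n "${containers}" ]; then
    # Unquoted on purpose: there may be several container IDs.
    docker stop ${containers} >/dev/null 2>&1
    docker rm ${containers} >/dev/null 2>&1
fi

# Remove the image only if it is present; inspect exits non-zero otherwise.
if docker image inspect "${IMAGE_TAG}" >/dev/null 2>&1; then
    docker rmi "${IMAGE_TAG}" >/dev/null 2>&1
fi
```

The `docker image inspect` test is the key piece: its exit status tells you whether the image exists without printing anything once stdout and stderr are redirected.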

Skipping parts of a Travis job if building from a fork

I've been writing some shell tasks that can't run unless there are secure environment variables present in a Travis CI PR build. There is an auth token that must be present to push some information up about the build, so for builds originating from forks, I'd like to simply skip these parts. They're not critical.
How can I tell if a build is originating from a fork?
From the documentation around "Environment Variables":
TRAVIS_SECURE_ENV_VARS: Whether or not secure environment vars are being used. This value is either "true" or "false".
This is a little ambiguous. Does it mean that secure environment variables are being used anywhere (as in, present in .travis.yml)? That they are being exported as environment variables in the current build? I'm not sure this is a good way to guarantee that I'm testing a pull request that has originated from a fork, but I didn't see any other way to do it.
My first attempt had code that looked something like
[ ${TRAVIS_SECURE_ENV_VARS} = "false" ] && exit 0; # more shell code here...
but this appears to continue ahead and push without an auth token, failing the task (and the build). Further complicating matters, should the command fail, the output may contain the auth token, so everything from stderr and stdout is redirected to /dev/null. Considering that builds take several minutes to start, I'm stuck in a long debug cycle.
My next attempt simply bypassed this built-in environment variable, in favor of instead trying to grab a secure environment variable directly.
[ ${ghToken} -n ] && exit 0;
This fails in the same way as above. I'm beginning to wonder if [ $COND ] && exit 0; really works the way I'm expecting it to in this context. It seems to work just fine when I run equivalent shell scripts locally (Mac OS X and bash).
Does Travis offer a built-in way to determine if the pull request being built originated from the original repository versus a fork?
Here is my current work around.
screenshotsClone: {
command: ['[ ${ghToken} ] &&',
'git submodule add -f', screenshotPullTemplate, 'screenshots > /dev/null 2>&1;'].join(' '),
options: {
stdout: false,
failOnError: false
}
}
I'd rather not silently pass on errors should there be a legitimate problem with the shell task. At this point I might as well remove the environment variable check that precedes it.
Stop using grunt-shell for things in your .travis.yml. Put those shell tasks into their own files so that you can stop using /bin/sh and start using /bin/bash, which is what you've been testing against locally.
Something like this will fix your problems.
Don't forget to mark the shebang as #! /bin/bash, which is crucial for the kinds of checks that should be happening on Travis.
I think what you want is to check whether one of your secure environment variables is empty, to detect that you are running a build from a fork, and in that case stop the build script early.
Hence I would suggest using the -z comparison operator in bash, which tests for an empty string (the -n operator you tried tests for a non-empty string). A non-empty secure environment variable means you are not running the build from a fork.
My suggestion is to change your line for:
[ -z "${ghToken}" ] && exit 0;
Hope this helps.
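Putting the -z suggestion into a standalone bash script might look like this (names are illustrative; ghToken stands for the secure variable, which Travis leaves unset on fork builds):

```shell
#!/usr/bin/env bash
# Sketch of the guarded task as its own bash script.
push_screenshots() {
    if [ -z "${ghToken:-}" ]; then
        echo "skipped: no token (fork build)"
        return 0
    fi
    echo "pushing with token"
    # Real work here, with output silenced so the token never leaks:
    # git submodule add -f "$SCREENSHOT_REPO" screenshots >/dev/null 2>&1
}

unset ghToken
push_screenshots        # prints: skipped: no token (fork build)

ghToken="s3cret"
push_screenshots        # prints: pushing with token
```

Because the skip happens inside the script with an explicit exit path, a genuine failure of the git command afterwards can still fail the build, rather than being silently swallowed.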

Protect Jenkins build from clean up via api

I want to protect some Jenkins builds from the auto cleanup. I have found http://ci.jenkins.com/job/[job_name]/[build_v]/toggleLogKeep, however this requires me to check the current state first. Are there any other endpoints I can use? Ideally there would be /keepBuildForever and /dontKeepBuildForever.
It looks like there is no good solution. The best approach is to list all keep-forever builds, check whether your build is already in that list, and if not, hit the /toggleLogKeep endpoint:
buildsXml = http://ci.jenkins.com/api/xml?depth=2&xpath=/hudson/job/build[keepLog=%22true%22]/url&wrapper=forever
#check if your build is in buildsXml
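That check-then-toggle sequence can be sketched against the JSON API (the URL, job name, and credentials below are placeholders; authentication uses a Jenkins API token):

```shell
#!/usr/bin/env bash
# Sketch: make "keep this build" idempotent. Read keepLog from the JSON
# API, then POST toggleLogKeep only when it is not already true.
ensure_kept() {   # usage: ensure_kept <jenkins_url> <job_name> <build_number>
    local base="$1/job/$2/$3"
    local keep
    keep=$(curl -s -u "${JENKINS_AUTH}" "${base}/api/json?tree=keepLog" \
           | grep -o '"keepLog":true')
    if [ -z "${keep}" ]; then
        curl -s -u "${JENKINS_AUTH}" -X POST "${base}/toggleLogKeep"
    fi
}

# JENKINS_AUTH="user:apitoken"
# ensure_kept "http://ci.jenkins.com" "job_name" 123
```

Wrapping the state check and the toggle in one function means callers can treat it as the /keepBuildForever endpoint the question wished for.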
After accidentally deleting an important build, I found this alternative solution:
# if running inside a job, the following vars are already populated:
#JOB_NAME=yourjobname
#BUILD_NUMBER=123 #your build number
#JENKINS_HOST=192.168.1.11
#JENKINS_PORT=8080
#JENKINS_URL=http://${JENKINS_HOST}:${JENKINS_PORT}
wget --no-check-certificate "${JENKINS_URL}/jnlpJars/jenkins-cli.jar"
java -jar jenkins-cli.jar -s "$JENKINS_URL" keep-build "$JOB_NAME" "$BUILD_NUMBER"

Ruby on Rails- how to run a bash script as root?

What I want to do is use button_to and friends to start different scripts on a Linux server. Not all scripts will need root, but some will, since they'll be running apt-get dist-upgrade and the like.
The PassengerDefaultUser is set to www-data in apache2.conf
I have already tried running scripts from the controller that do little things like writing to text files, just so that I know I am having Rails execute the script correctly (in other words, I know how to run a script from the controller). But I cannot figure out how to run a script that requires root access. Can anyone give me a lead?
A note on security: thanks for all the warnings against hacking that were given. You don't need to lose any sleep, though, because A) the webapp is not accessible from the public internet, only on private intranets; B) the app is password protected; and C) the user will not be able to supply custom input, only make selections from a form that will be passed as variables to the script. That said, I am not disregarding your recommendations; I will consider them very carefully in my design.
You should be using the setuid bit to achieve the same functionality without sudo. But you shouldn't be running Bash scripts. Setuid is much more secure than sudo. Using either sudo or setuid, you are running a program as root. But only setuid (with the help of certain languages) offers some added security measures.
Essentially you'll be using scripts that are temporarily allowed to run as the owner, instead of the user that invoked them. Ruby and Perl can detect when a script is run as a different user than the caller and enforce security measures to protect against unsafe calls. This is called taint mode. Bash does not run in taint mode at all.
Taint mode essentially works by declaring all input from an outside source unsafe for use when passed to a system call.
Setting it up:
Use chmod to set the permissions on the script you want to run to 4755, and set its owner to root:
$ chmod 4755 script.rb
$ chown root script.rb
Then just run the script as you normally would. The setuid bit kicks in and runs the script as if it was run by root. This is the safest way to temporarily elevate privileges.
See Ruby's documentation on safe levels and taint to understand Ruby's sanitation requirements for protecting against tainted input causing harm, or the perlsec FAQ to learn how the same thing is done in Perl.
Again: if you're dead set on running scripts as root from an automated system, do not use Bash! Use Ruby or Perl instead. Taint mode forces you to take security seriously and can avoid many unnecessary problems down the line.
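To see concretely what taint mode protects against, here is the kind of injection plain shell happily performs; the package name stands in for the form input (all values invented for illustration):

```shell
#!/usr/bin/env bash
# Illustration of the risk taint mode guards against: in plain shell,
# outside input flows straight into commands.
PKG='vim; rm -rf /tmp/important'   # attacker-controlled value

# Unsafe pattern: the ';' would split this into a second command.
# eval "apt-get install $PKG"

# Safer: keep the value a single argument and never pass it through eval.
printf 'would run: apt-get install %q\n' "$PKG"
```

Taint-mode languages refuse to hand such a value to a system call until it has been explicitly validated; Bash has no equivalent safeguard, which is the point of the answer.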
