What does the GitHub Actions environment variable "CI" stand for? - environment-variables

I tried to find the meaning of CI in the GitHub Actions documentation, but I could not find a detailed explanation, except this:
CI: Always set to true.

CI stands for "continuous integration". It's an environment variable defined by convention in many CI servers such as GitHub Actions, GitLab CI, Travis CI, etc. Its value is always set to true when builds run on those servers.
Many tools check whether that variable is set to true and adjust their configuration and output to suit a CI environment. For example, some tools are interactive by default, but if CI=true they skip user prompts and use default values or passed-in arguments. Another example is pipenv, which applies special configuration when CI=true: https://github.com/pypa/pipenv/blob/207f2f565d2c4493bd8cdfb55a9b565aa58984fc/pipenv/environments.py#L27
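As an illustration (not tied to any particular tool or library), a minimal Rust sketch of such a check might look like this:

use std::env;

fn main() {
    // By convention, CI servers set CI=true; anything else is treated as an
    // interactive (non-CI) environment.
    let in_ci = env::var("CI").map(|v| v == "true").unwrap_or(false);

    if in_ci {
        // Skip interactive prompts and fall back to defaults or passed-in arguments.
        println!("CI detected: running non-interactively");
    } else {
        println!("No CI detected: prompting the user");
    }
}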

Related

Passing arguments to Docker build while deploying AppEngine flex

I'm wondering if it's possible to feed arguments or environment variables into a Dockerfile used by AppEngine (flex environment).
I'd like to use this command:
COPY ${STAGE}/keycloak-files/realm-config/* /opt/jboss/keycloak/realm-config/
"STAGE" variable would allow to select the origin (I have a "staging" and "production" directory, containing different configurations).
I've got two different app.yml files, one for each environment, but from what I read online, environment variables are not exposed to the Dockerfile at build time.
People suggest passing build arguments to accomplish this, but how would that be possible with AppEngine, where we don't run the docker build command directly?
As @DamPlz said, there is no straightforward way to pass environment variables from app.yaml to the Dockerfile during the deployment phase. Here are some workarounds I could think of:
One option could be to create the variable in the Dockerfile directly; if you want to change it for each deployment, you can use a placeholder value and have a script update it before running "gcloud app deploy" (see the sketch below).
Alternatively, you could use build triggers in Google Cloud Build to set it in the Docker image using user-defined substitutions.
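A rough sketch of the first workaround, assuming a hypothetical Dockerfile.template containing a literal __STAGE__ placeholder and a STAGE environment variable set by the deploy script (these names are illustrative, not part of App Engine):

use std::fs;

fn main() -> std::io::Result<()> {
    // Hypothetical setup: Dockerfile.template contains "__STAGE__" where the
    // stage directory name should go, e.g. COPY __STAGE__/keycloak-files/...
    let stage = std::env::var("STAGE").unwrap_or_else(|_| "staging".to_string());

    // Substitute the placeholder and write the real Dockerfile; "gcloud app deploy"
    // is then run as usual afterwards.
    let template = fs::read_to_string("Dockerfile.template")?;
    fs::write("Dockerfile", template.replace("__STAGE__", &stage))?;
    Ok(())
}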

Access Cargo features *inside* the build script

How can I access which features the package is being built with from inside the build.rs script? There is an incredibly expensive step in the script which is only needed for a particular Cargo feature, but I can't see any way to access the configured features inside the build script.
Is there any way to read whether or not a given feature is enabled in the build.rs script?
I haven't been able to find documentation here, but was able to figure out one solution by guessing.
Cargo features are available not just in the main source files, but inside the build.rs script as well. So you can use any of the standard ways to check configuration, like the cfg! macro and the #[cfg(feature = "...")] attribute, as described in https://doc.rust-lang.org/reference/conditional-compilation.html and How do I use conditional compilation with `cfg` and Cargo?
Cargo sets a number of environment variables when the build scripts are run:
https://doc.rust-lang.org/cargo/reference/environment-variables.html#environment-variables-cargo-sets-for-build-scripts
Including an environment variable for each feature:
CARGO_FEATURE_<name> — For each activated feature of the package being built, this environment variable will be present where <name> is the name of the feature uppercased and having - translated to _.
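For example, a build.rs sketch along these lines, assuming a hypothetical feature named expensive-step declared in Cargo.toml, would gate the costly work on that feature:

// build.rs
fn main() {
    // Cargo exports CARGO_FEATURE_<NAME> for each activated feature; the
    // hypothetical feature "expensive-step" becomes CARGO_FEATURE_EXPENSIVE_STEP.
    if std::env::var_os("CARGO_FEATURE_EXPENSIVE_STEP").is_some() {
        // Run the incredibly expensive step only when the feature is enabled.
        println!("cargo:warning=running the expensive step");
    }
}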

Jenkins environment variables

I am a novice to Jenkins administration, trying to help a friend set up Jenkins for his startup.
I know there are plugins in Jenkins but am not quite sure how to achieve his requirements.
He has a Spring application which uses dispatcher.xml to reference jdbc.properties files (one for each environment, e.g. jdbc_dev, jdbc_test, jdbc_prod).
How do I set up environment variables in Jenkins so that the respective jdbc.properties is picked up when the WAR file is built?
I think the following plugins will help your friend a lot.
1) EnvInject Plugin: to inject environment variables depending on the requirements.
2) Role Strategy Plugin: to manage authorization and permissions for various users, e.g. admin, tester, developer, guest.
For more info on configuring this plugin, read this answer.
3) JobConfigHistory Plugin: to revert changes if you screw anything up.
Hope it helps.
See Parameterized Build:
Sometimes, it is useful/necessary to have your builds take several "parameters".
...
The parameters are available as environment variables. So e.g. a shell ($FOO, %FOO%) or Ant (${env.FOO}) can access these values.
See also the Matrix Project Plugin:
A multi-configuration project is useful for instances where your builds will make many similar build steps, and you would otherwise be duplicating steps.

Relevance of specifying Tool Locations in Jenkins Node Configuration

What does specifying tools under "Tool Locations" in the Jenkins node configuration actually do?
I wanted to run a mvn command in a freestyle project. I specified the Maven tool and its home under "Tool Locations", but that didn't help; the job failed because it could not find mvn. I followed this solution and modified the PATH variable to include the Maven path. That did the trick.
So how exactly does specifying Tool Locations help? Is it just to help users see the tool paths when they visit the Jenkins node configuration page?
In Global Configuration, you can configure different tools, such as Maven, JDK, Ant, etc. When you do that, you also provide an installation method (usually an automatic online installer).
Now, in the Job Configuration, when you configure a build step using one of the tools, for example Invoke Ant, you will see a drop-down. The first option is Default; the rest are whatever tool installations you've configured in Global Configuration.
If you choose Default it will use whatever is already installed on the system by simply executing a command like java, ant, or mvn. Obviously, if those commands are not in the path, it will fail.
If you choose one of the configured options, Jenkins will install it to the system (if not already done), and use that particular version (as opposed to the one you have under path).
This installation will happen on both the master and the slave nodes. If you do not want to use this automatic installation on a particular node, you can manually specify the location of the particular tool in the screenshot you provided.
The above is only useful if, in your Job Configuration, you select a specific version of the tool other than Default, because, as already explained, Default will use whatever is already installed on the system (not installed by Jenkins).
To answer your queries:
So how exactly does specifying Tool Locations help?
Well, not really, I would say. It's meant to specify a custom installation directory path for any program that's already set up in your global config page in a different location. If you have configured mvn 2.0 in the global settings but wish to use mvn 3.0 on this specific node, then you can use Tool Locations. However, it will work only if the path to the executable is already present in the PATH environment variable of the user the slave process runs as.
Is it just to help users see the tool paths when they visit the Jenkins node configuration page?
Well, yes, you can say that. I am saying so because whenever you specify a custom path, it generally isn't in the PATH of the user the process runs as. That's true in most cases because such binaries don't run independently; they need other dependent modules to work properly, otherwise simply dropping the binary into one of the directories already on the PATH would have sufficed. So it's safe to say that you explicitly have to add the path to the executable to PATH to make it work.
If you look at the Help (?) section for Tool Locations, even that doesn't sound very enthusiastic about this feature :P It says:
You can specify the location of certain tools on this node, overriding the global configuration. (You may prefer to use automatic tool installers instead, removing the need to configure each node separately.)

JENKINS_HOME environment variable used for 2 conflicting purposes

It appears that Jenkins is using the environment variable $JENKINS_HOME for 2 different purposes, and for each purpose it will get a different value.
Purpose #1: First, there is the JENKINS_HOME that is a directory on the local file system where Jenkins stores the files it creates. Jenkins uses this directory for disk space to perform builds and keep archives. So a sample value might be:
export JENKINS_HOME=/var/jenkins
That purpose is described here:
https://wiki.jenkins-ci.org/display/JENKINS/Tomcat
https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins
Purpose #2:
There is another instance where Jenkins uses the JENKINS_HOME environment variable, and that is for monitoring external jobs. But this time JENKINS_HOME is a URL, like this:
export JENKINS_HOME=http://user:pw@myserver.acme.org/path/to/jenkins/
That purpose is described here:
https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+external+jobs
So it seems odd that Jenkins would use the same environment variable, yet its value changes depending on the purpose. I would think that the external job feature would use another name for the environment variable, like JENKINS_URL. I suppose as a workaround I can just set the environment variable in the servlet container (Tomcat for me) instead of on the operating system, so there is no conflict. Still, the fact that this conflict exists for the variable in the first place seems strange. Is there something I'm missing?
That is pretty confusing, but the second purpose applies to the external job monitoring client that runs outside of Jenkins, not to Jenkins itself; so it's not Jenkins that is using the $JENKINS_HOME value in this case, and there is no conflict. They could have picked a better name for the variable, though.
In most other cases, the Jenkins master URL is referred to as JENKINS_URL - see the Jenkins CLI documentation for example.
