I have a user-defined build setting, LONG_SWIFT_COMPILE_LIMIT_MS, at the project level, which is fed into the Other Swift Flags build setting, also at the project level.
This causes the compiler to emit a warning when any function takes longer than 1000 ms to compile. The problem is that CI is much slower than our dev machines, so on dev machines we would like the limit to be more like 300 ms, and on CI 2000+ ms, or perhaps disabled entirely. We build with warnings as errors, so when the shared CI machines run slowly, this often causes the build to fail.
When running on CI, a CI environment variable is set, so I'd like to change the value of the LONG_SWIFT_COMPILE_LIMIT_MS build setting depending on whether CI is set. How can I do this? I tried adding:
if [ -z "${CI+x}" ]; then
    # Not running on CI
    export LONG_SWIFT_COMPILE_LIMIT_MS=300
else
    # Running on CI
    export LONG_SWIFT_COMPILE_LIMIT_MS=2000
fi
as a pre-build script on the scheme, but that didn't work.
You can specify build setting overrides on the xcodebuild command line by adding <setting>=<value> arguments; in your case, LONG_SWIFT_COMPILE_LIMIT_MS=2000.
This can't be done from an Xcode scheme pre-build script: environment variables set there are not fed back into build settings.
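For example, a minimal CI wrapper script, assuming a hypothetical scheme named MyApp (the CI check mirrors the one you already wrote):

if [ -z "${CI+x}" ]; then
    LIMIT=300    # dev machine: catch slow functions early
else
    LIMIT=2000   # shared CI machine: be lenient
fi
xcodebuild -scheme MyApp build LONG_SWIFT_COMPILE_LIMIT_MS="$LIMIT"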
I observe that my Bazel build agent frequently builds the project from scratch (including compiling grpc, which is unchanged) instead of taking results from the cache. Is there a way, like query or cquery (pardon my ignorance), to determine why the cache is considered invalid for a particular target? Or any techniques to tackle this cache invalidation problem?
This is how a Bazel build works. When running a build or a test, Bazel does the following:
Loads the BUILD files relevant to the target.
Analyzes the inputs and their dependencies, applies the specified build rules, and produces an action graph.
Executes the build actions on the inputs until the final build outputs are produced.
If you have any concrete observations beyond this, please share the complete details!
This is most likely due to the rebuild sensitivity to particular environment variables. Many build actions will read from environment variables and use them to change the outputs. Bazel keeps track of this and will rebuild seemingly unchanged remote targets when your env changes.
To demonstrate this (a scripted version follows the steps):
Build grpc twice, to ensure it is cached the second time
Change the PATH environment variable (your IDE may do this without you knowing)
mkdir ~/bin && export PATH=$PATH:~/bin
Rebuild grpc (This should trigger a complete rebuild)
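As a sketch, assuming a hypothetical //third_party/grpc label (substitute your real gRPC target):

bazel build //third_party/grpc/...   # first build, populates the cache
bazel build //third_party/grpc/...   # second build, fully cached
mkdir -p "$HOME/bin" && export PATH="$PATH:$HOME/bin"   # perturb the environment
bazel build //third_party/grpc/...   # PATH changed, so everything rebuilds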
There are a couple of helpful flags to combat this rebuild sensitivity, and I'd recommend adding them to your .bazelrc.
--incompatible_strict_action_env: Freezes your environment and doesn't source environment variables from your shell.
--action_env: Modifies environment variables as needed for your build.
# file //.bazelrc
# Don't source environment from shell
build --incompatible_strict_action_env
# Use action_env as needed for your project
build --action_env=CC=clang
I am trying some tests with travis-ci/travis-build.
Can I make a script that includes environment setup, like apt-get installs and starting services, for the addons part of .travis.yml?
When I tried travis compile, it looks like the addons part was totally ignored.
Some parts of the compilation of .travis.yml happen a phase earlier, when it is decided which image, on which infrastructure, the build should run. The travis compile command from travis-ci/travis-build generates the shell script that runs inside the container, and there are some parts it does not take into account because they are handled by another layer.
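As a usage sketch, assuming the travis CLI with travis-build installed as a plugin:

travis compile > build.sh   # emits the bash script for the current .travis.yml
bash build.sh               # runs only what this layer handles; addons won't appear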
We have a Code Composer Studio (Eclipse) project that uses CMake to generate makefiles and build. The project compiles as expected when it is manually imported onto the Jenkins slave (Win10 x64) and built from the command line, but fails when the build is handled by Jenkins. The failure always follows the same pattern: a single letter is dropped from the path of an object file. For example, [Repo directory]/Cockpit_Scaling_and_Exceedance_data.dir becomes [Repo directory]/Cockpit_Scaling_and_Exceedance_ata.dir, and linking fails because the referenced object file cannot be found.
I made sure that there are no differences between the account environment variables and the system environment variables and have also configured the Jenkins Service to use the admin account on the slave instead of SYSTEM in order to get rid of as many differences between Jenkins and the command line as possible.
The project will build successfully using one of our other Jenkins slaves (also Win10 x64), so we know that it's not a Windows 10 issue or a problem with our Jenkins configuration. Since I can't find any differences between the configuration of the two slave machines, I was hoping that someone might be able to suggest somewhere to look for this path issue.
I never found out why the paths to object files were being mangled, but I did get the project to build successfully on the slave via Jenkins. All I did was change all of my system environment variables into user environment variables. I copy-pasted, so I know that the variables themselves did not change.
I have no idea why this corrected the issue, as I had inserted a whoami call at the beginning of the build to confirm that Jenkins was indeed running as a user and not SYSTEM. I guess from this point on all of my environment variables will have to be user-specific rather than SYSTEM-wide...
EDIT: The problem has returned. I have made no further progress in tracking down the cause behind this issue, but I have found that I do not see this symptom when running the scripts in a bash environment instead of a Windows command prompt. Fortunately for me the scripts have all been written in such a way that they can be run in both environments, so I have had my coworkers use bash instead for them.
We are looking for ways to stop a build if it is being built for release and contains strings that reference "localhost", contain "10.0", or violate other rules. Does Xcode have a way of enforcing these rules?
You can add a custom shell script build phase to your project; you can run whatever shell script you want in there. You can also inspect the plethora of build environment variables, including one that tells you whether you are building for Release or Debug.
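As a sketch of such a build phase; CONFIGURATION and SRCROOT are standard Xcode build settings, while the Sources path and the patterns are assumptions for illustration:

if [ "${CONFIGURATION}" = "Release" ]; then
    # grep exits 0 (success) when a forbidden string is found
    if grep -rn -e 'localhost' -e '10\.0' "${SRCROOT}/Sources"; then
        echo "error: forbidden host strings found in a Release build"
        exit 1
    fi
fi

Prefixing the message with "error:" makes Xcode report it as a build error, which fails the build.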
I am writing a script to automate my iOS builds. It will be run outside of Xcode, either via Terminal or from build automation software. Is there any way to have access to Xcode environment variables in my script, so I don't have to try to derive them myself?
For example, can I get access to PROJECT_DIR instead of assuming I'm in the current directory and running pwd?
I am currently hardcoding the product names for my different build configurations. (I'm also hardcoding the build configs, but I could parse them from xcodebuild -list.) Is there a way to get the product name if you know the build config name?
(I saw this related question but it doesn't have an answer for me.)
The Xcode environment variables are only defined for child processes of the xcodebuild command-line tool.
One solution I used is to have a very simple script as part of my build process (Project->New Build Phase->Add Run Script Build Phase). All this script does is export the necessary variables and call a script in my path somewhere.
That script could be generated by your build script before calling xcodebuild, and voilà! You have an external script that has access to Xcode build variables.
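For example, the build phase could be as small as this; my_build_hook.sh and its location are hypothetical, and the exported names are standard Xcode build settings already present in the phase's environment:

# Run Script build phase: hand Xcode's variables to an external script
export PROJECT_DIR CONFIGURATION PRODUCT_NAME
"$HOME/bin/my_build_hook.sh"

Because the hook runs as a child process of the build phase, it inherits every exported build variable.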