I have a test that Bazel reruns every time, even though nothing in the code has changed.
I use
bazel test --explain ~/bazel-explain.log --verbose_explanations <test-target-name>
to start the test.
Build options: <truncated>
Executing action 'BazelWorkspaceStatusAction stable-status.txt': unconditional execution is requested.
Executing action 'FileWrite build-info-volatile.h': unconditional execution is requested.
Executing action 'Testing <test-name>': One of the files has changed.
So the only useful information here is "One of the files has changed".
How can I find out which file specifically has changed?
What does "file changed" mean in Bazel? Does Bazel compare a hash of the file (or directory)? Does a file count as changed, in Bazel's terms, if only its attributes change (such as the modification time)?
I found the cause. I use a Zeppelin notebook as an external dependency. When Zeppelin runs, it changes files inside its own directory: it writes logs and modifies the interpreters.json config. On every subsequent run of the test, the Zeppelin directory differs from the previous snapshot. That's why Bazel reruns the test every time.
When I remove the logs and revert the changes to the interpreters.json file, Bazel does not rerun the test. Now I will try to find out how to configure Zeppelin to use a different directory for the files it changes at run time.
Unfortunately, I have not found any Bazel tools or options that would help me find the cause of the issue. I had to diff snapshots of the Zeppelin directory manually.
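For reference, this is roughly what the manual diff looked like. It is only a sketch: the external-directory path (bazel-myworkspace/external/zeppelin) and the test label (//my:test) are placeholders for whatever your workspace actually uses.
# Checksum snapshot of the external Zeppelin directory before the test run
find bazel-myworkspace/external/zeppelin -type f -exec sha256sum {} + | sort -k2 > /tmp/zeppelin-before.txt
bazel test //my:test
# Snapshot again after the run, then diff the listings to see which files changed
find bazel-myworkspace/external/zeppelin -type f -exec sha256sum {} + | sort -k2 > /tmp/zeppelin-after.txt
diff /tmp/zeppelin-before.txt /tmp/zeppelin-after.txt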
Related
I observe that my Bazel build agent frequently builds the project from scratch (including compiling grpc, which has not changed) instead of taking results from the cache. Is there a way, like query or cquery (pardon my ignorance), to determine why the cache is considered invalid for a particular target? Or any techniques to tackle this cache-invalidation problem?
This is how a Bazel build works:
When running a build or a test, Bazel does the following: it loads the BUILD files relevant to the target, analyzes the inputs and their dependencies, applies the specified build rules to produce an action graph, and executes the build actions on the inputs until the final build outputs are produced.
If you have any clear assumptions about what is going wrong, can you please share the complete details?
This is most likely due to the rebuild sensitivity to particular environment variables. Many build actions will read from environment variables and use them to change the outputs. Bazel keeps track of this and will rebuild seemingly unchanged remote targets when your env changes.
To demonstrate this (a minimal sketch putting these steps together follows):
Build grpc (twice, so that the second build is fully cached)
Change the PATH environment variable (your IDE may do this without you knowing)
mkdir ~/bin && export PATH=$PATH:~/bin
Rebuild grpc (this should trigger a complete rebuild)
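A minimal sketch of those steps as shell commands, assuming grpc is pulled in as the external repository com_github_grpc_grpc (the target label is only an illustration; substitute whatever you actually build):
bazel build @com_github_grpc_grpc//:grpc   # first build, populates the cache
bazel build @com_github_grpc_grpc//:grpc   # second build, should be fully cached
mkdir -p ~/bin && export PATH=$PATH:~/bin  # change the environment, as an IDE or new shell might
bazel build @com_github_grpc_grpc//:grpc   # can now trigger a rebuild if actions depend on PATH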
There are a couple of helpful flags to combat this rebuild sensitivity, and I'd recommend adding them to your .bazelrc.
--incompatible_strict_action_env: freezes the action environment and stops sourcing environment variables from your shell.
--action_env: modifies environment variables as needed for your build.
# file //.bazelrc
# Don't source environment from shell
build --incompatible_strict_action_env
# Use action_env as needed for your project
build --action_env=CC=clang
I am completely new to Jenkins and have this question. I created two projects in Jenkins. The first project fetches from GitHub whenever it notices any change in my repo and stores a local copy on my machine.
The second project starts only if the first project is stable. If it is, the second project first compiles all the Java files (which I put in a compile.bat file), then runs the testng.xml file, which runs the JUnit and Selenium tests (run.bat file).
What I want is this: if there are no compilation errors from compile.bat, proceed to run.bat. But right now, even though there are errors in the Java files and compile.bat catches them, Jenkins still proceeds to run.bat and passes the build at the end. I want the build to fail when there is any kind of error.
Here is the link to my repo for the batch file and other files if that helps:
https://github.com/na2193/Demo
I figured it out. You need to use the "Conditional step (single)" build step, and there you can define what you want to do.
I want to access and grep Jenkins Console Output as a post build step in the same job that creates this output. Redirecting logs with >> log.txt is not a solution since this is not supported by my build steps.
Build:
echo "This is log"
Post build step:
grep "is" path/to/console_output
Where is the specific log file created in filesystem?
@Bruno Lavit has a great answer, but if you want, you can just access the log and download it as a txt file to your workspace from the job's URL:
${BUILD_URL}/consoleText
Then it's only a matter of downloading this page to your ${Workspace}
You can use "Invoke ANT" and use the GET target
On Linux you can use wget to download it to your workspace
etc.
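For example, a minimal sketch of the wget approach, assuming an "Execute shell" post-build step on a Linux node and a job that is readable without authentication (otherwise pass credentials to wget). Note that it only captures the console output produced up to that point in the build:
wget -O console.txt "${BUILD_URL}/consoleText"   # BUILD_URL is set by Jenkins for the running build
grep "is" console.txt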
Good luck!
Edit:
The actual log file on the file system is not on the slave, but is kept on the master machine. You can find it under: $JENKINS_HOME/jobs/$JOB_NAME/builds/lastSuccessfulBuild/log
If you're looking for another build just replace lastSuccessfulBuild with the build you're looking for.
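If the job itself runs on the master node, a post-build shell step can grep that file directly. A minimal sketch using the environment variables Jenkins sets for each build (the path can differ for jobs inside folders or for matrix jobs):
grep "is" "$JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"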
Jenkins stores the console log on master. If you want programmatic access to the log, and you are running on master, you can access the log that Jenkins already has, without copying it to the artifacts or having to GET the http job URL.
From http://javadoc.jenkins.io/archive/jenkins-1.651/hudson/model/Run.html#getLogFile(), this returns the File object for the console output (in the jenkins file system, this is the "log" file in the build output directory).
In my case, we use a chained (child) job to do parsing and analysis on a parent job's build.
When using a groovy script run in Jenkins, you get an object named "build" for the run. We use this to get the http://javadoc.jenkins.io/archive/jenkins-1.651/hudson/model/Build.html for the upstream job, then call this job's .getLogFile().
Added bonus; since it's just a File object, we call .getParent() to get the folder where Jenkins stores build collateral (like test xmls, environment variables, and other things that may not be explicitly exposed through the artifacts) which we can also parse.
Double added bonus; we also use matrix jobs. This sometimes makes inferring the file path on the system a pain. .getLogFile().getParent() takes away all the pain.
You can install the Jenkins Console log plugin to write the log to your workspace as a post-build step.
You have to build the plugin yourself and install it manually.
Next, you can add a post-build step like this:
With an additional post-build step (shell script), you will be able to grep your log.
I hope it helped :)
Log location:
${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log
Get the log as text and save it to the workspace:
cat ${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log >> log.txt
For very large output logs it could be difficult to open (network delay, scrolling). This is the solution I'm using to check big log files:
https://${URL}/jenkins/job/${jobName}/${buildNumber}/
In the left column you see: View as plain text. Right-click on it and choose "Save link as". Now you can save your big log as a .txt file. Open it with Notepad++ and you can go through your logs easily, without network delays while scrolling.
I found the console output of my job in the browser at the following location:
http://[Jenkins URL]/job/[Job Name]/default/[Build Number]/console
You can get the console log file (using bash magic) for the current build from a shell script this way, and check it for some error string, failing the job if it is found. This is designed for use in a shell-script build step; use only the first two lines if you just want the file name:
logFilename=${JENKINS_HOME}/${JOB_URL:${#JENKINS_URL}}
logFilename=${logFilename//job\//jobs\/}builds/${BUILD_NUMBER}/log
grep "**Failure**" ${logFilename} ; exitCode=$?
[[ $exitCode -ne 1 ]] && exit 1
You have to build the file name by taking the JOB_URL, stripping off the leading host-name part, prepending the path to JENKINS_HOME, replacing "/job/" with "/jobs/" to handle nested folders, and appending the current build number and the file name.
grep returns 0 if the string is found, 1 if it is not found, and 2 if there is a file error. So an exit code other than 1 means the error-indication string was found (or the log could not be read), and in that case the script exits 1, which makes the build fail.
Easy solution would be:
curl http://jenkinsUrl/job/<Build_Name>/<Build_Number>/consoleText -o <FilePathToLocalDisk>
or for the last successful build...
curl http://jenkinsUrl/job/<Build_Name>/lastSuccessfulBuild/consoleText -o <FilePathToLocalDisk>
(Use -OutFile instead of -o if curl is PowerShell's alias for Invoke-WebRequest on your machine.)
I have a TFS build in a Git team project that uses the default template. It builds a .proj file containing a single target that executes a .PS1 file in Powershell.exe.
The .PS1 generates its own log file. I have been trying to figure out how to get this file to copy to the drop directory \logs folder. From what I can tell, TFS only copies specific files to this output directory:
ActivityLog.AgentScope.[id].xml
ActivityLog.xml
build.log
Anyone tried getting custom logging info to this directory? I tried writing to build.log but that failed with errors.
I like @MrHinsh's answer better than mine, but I found that you can write to a file at this location: $(TF_BUILD_DROPLOCATION)\logs during build.
I assumed that since the path doesn't exist until the log files are copied it would not work. But it does... the TFS/MSBuild log files are simply merged in. And it even seemed to work with a name conflict. For example, if your file is named build.log, MSBuild's will be renamed to build.01.log.
In your PowerShell script you can simply call Write-Host to write to the build log. All of the standard output methods are captured, although you need to use the -Verbose flag to get the text to always be written.
I'm running an Ant build through Jenkins, and at the stage where it deploys to a Windows share it returns the following error:
Failed to copy FILE to FILE2 due to failed to create the parent directory for FILE2 (I've taken the paths out to keep the question shorter).
I'm guessing there might be a permissions problem with the default Jenkins user, but this problem has only just started occurring. Any help would be great.
Thanks
This is a pretty old question, but I thought I'd come back and complete it with a short update on what was actually going on. Someone had changed the password for the user that was logged on to the VM that Jenkins was running on, and when Jenkins tried to create the directory to put the files into, it ran into permissions errors. The only problem was that the error message wasn't very descriptive.
So in the end this was an infrastructure problem rather than anything to do with the ant script.
I take it you're doing something like this:
<copy file="${from.dir}/${from.file}"
tofile="${to.dir}/${to.file}"/>
And, you're getting an error that ${to.dir} doesn't exist.
In earlier versions of Ant, you definitely had to create the directory before doing a copy:
<mkdir dir="${to.dir}"/>
<copy file="${from.dir}/${from.file}"
tofile="${to.dir}/${to.file}"/>
I think I also noticed that later versions of Ant will create the directory for you when it doesn't exist. Still, I've always been in the habit of putting <mkdir/> in front of any task that creates a new file in a new directory, including things like <zip/> and <tar/>.
Here are some questions:
Do your users also use Ant to run their builds? I know that this isn't always the case. Many users use Eclipse and don't bother with Ant.
Is the version of Ant your users have the same as the one Jenkins uses?
Do you do a clean? Jenkins can emulate a clean either by doing an update, reverting and removing unversioned elements, or by simply creating a brand-new working directory. If your users don't remove the destination directory where ${to.file} is being placed, it might work locally, but not on Jenkins.
Can you manually run Ant from the Jenkins working directory? If so, what results do you get? (Remember to disable this Jenkins job before doing this. You don't want Jenkins to do a build while you're experimenting in the working directory.)