Continuous Integration with Blue Ocean, GitHub and NuGet causes path too long - Jenkins

I am trying to get a build chain working with Jenkins Blue Ocean, where the sources are in GitHub and additional dependencies are in NuGet.
When I restore packages, I get the following error after the package NUnit.Extension.VSProjectLoader.3.7.0:
Errors in packages.config projects
The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
On the agent machine the base path is very short: C:\guinode\. On top of that, additional length is added, making the packages folder the following (MyGitProject stands in for my actual project name; the length is the same):
C:\guinode\workspace\MyGitProject_master-CFRRXMXQEUULVB4YKQOFGB65CQNC4U5VJKTARN2A6TSBK5PBATBA\packages
Checking the package on the agent machine shows that NUnit.Extension.VSProjectLoader.3.7.0 was loaded completely.
Checking a local installation and substituting in the agent's base path, I can find two files whose full paths are 260 characters or longer.
They belong to an internal project, so I have a chance of influencing that.
None of the directories are 248 characters or more.
So the immediate solution for me is to redeploy the internal reference package.
My question for future reference is whether I can do something about the packages location, or about workspace\MyGitProject_master-CFRRXMXQEUULVB4YKQOFGB65CQNC4U5VJKTARN2A6TSBK5PBATBA, so that I save some characters by default.

According to the Microsoft documentation, it is possible to work around the 260-character limit.
If you prefix your path with '\\?\', e.g. '\\?\C:\guinode\workspace...', then long paths are in use (a little more than 32,000 characters). Setting JENKINS_HOME to this kind of path might make every process use it (I'm not sure).
On recent Windows versions (Windows 10 1607, Server 2016) there is a registry option to enable long paths. Set the following value to 1: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled (Type: REG_DWORD) and restart the process.
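If you prefer to set that registry value from a script, here is a minimal sketch (my addition, not part of the original answer; assumes Python on the Windows agent and an elevated prompt):

import winreg

# open the FileSystem key with write access (requires administrator rights)
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
    0,
    winreg.KEY_SET_VALUE,
)
# 1 enables NTFS long paths; restart the Jenkins agent afterwards
winreg.SetValueEx(key, "LongPathsEnabled", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)

Note that classic Win32 applications additionally need a longPathAware entry in their manifest to take advantage of this, so it is not a universal fix.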

Related

GitLab: How to list registry containers with size

I have a self-hosted GitLab CE Omnibus installation (version 11.5.2) running, including the container registry.
Now the disk space needed to host all those containers increases quite fast.
As an admin, I want to list all Docker images in this registry including their size, so I can maybe have some of them deleted.
Maybe I haven't looked hard enough, but I couldn't find anything in the Admin Panel of GitLab. Before I go to the trouble of writing a script that untangles the weird linking between the repositories and blobs directories in /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2 and then aggregates the sizes per repository, I wanted to ask:
Is there some CLI command or even a curl call to the registry to get the information I want?
Update: This answer is deprecated by now. Please see the accepted answer for a solution built into GitLab's Rails console directly.
Original Post:
Thanks to a great comment from #Rekovni, my problem is somewhat solved.
First: the huge amount of disk space used by Docker images was due to a bug in GitLab/Docker Registry. Follow the link in Rekovni's comment below my question.
Second: in his link, there's also an experimental tool being developed by GitLab. It lists and optionally deletes those old unused Docker layers (related to the bug).
Third: if anyone wants to do their own thing, I hacked together a pretty ugly script which lists the image size for every repo:
#!/usr/bin/env python3
# coding: utf-8
import os
from os.path import join, getsize

def get_human_readable_size(size, precision=2):
    suffixes = ['B', 'KB', 'MB', 'GB', 'TB']
    suffixIndex = 0
    while size > 1024 and suffixIndex < 4:
        suffixIndex += 1
        size = size / 1024.0
    return "%.*f%s" % (precision, size, suffixes[suffixIndex])

registry_path = '/var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2/'

repos = []
for repo in os.listdir(registry_path + 'repositories'):
    images = os.listdir(registry_path + 'repositories/' + repo)
    for image in images:
        try:
            layers = os.listdir(registry_path + 'repositories/{}/{}/_layers/sha256'.format(repo, image))
            imagesize = 0
            # sum the sizes of all layer blobs referenced by this image
            for layer in layers:
                for root, dirs, files in os.walk("{}/blobs/sha256/{}/{}".format(registry_path, layer[:2], layer)):
                    imagesize += sum(getsize(join(root, name)) for name in files)
            repos.append({'group': repo, 'image': image, 'size': imagesize})
        except FileNotFoundError:
            # if the folder doesn't exist, just skip it
            pass

repos.sort(key=lambda k: k['size'], reverse=True)
for repo in repos:
    print("{}/{}: {}".format(repo['group'], repo['image'], get_human_readable_size(repo['size'])))
But please do note: it's really static, doesn't list specific tags for an image, and doesn't take into account that some layers might be shared by other images. It will give you a rough estimate in case you don't want to use GitLab's tool mentioned above. You may use the ugly script as you like, but I do not take any liability whatsoever.
The answer above should now be considered deprecated.
As posted in the comments, if your repositories are nested, it will miss projects. Additionally, from experience, it seems to under-count the disk space used by the repositories it finds. It will also skip repositories created with GitLab 14 and up.
I was made aware of this by the GitLab Rails console that is now available: https://docs.gitlab.com/ee/administration/troubleshooting/gitlab_rails_cheat_sheet.html#registry-disk-space-usage-by-project
You can adapt that command to increase the number of projects it will find, as it only looks at the last 100 projects.
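For reference, the original question also asked about a direct call to the registry. Here is a rough sketch (my addition, not part of either answer) against the Docker Registry HTTP API v2; it assumes a GitLab personal access token with the read_registry scope, and the host names, user, and repository are placeholders:

import requests

GITLAB = "https://gitlab.example.com"        # placeholder
REGISTRY = "https://registry.example.com"    # placeholder
AUTH = ("myuser", "personal-access-token")   # token needs read_registry scope

def registry_token(scope):
    # GitLab issues short-lived registry JWTs from its /jwt/auth endpoint
    r = requests.get(GITLAB + "/jwt/auth",
                     params={"service": "container_registry", "scope": scope},
                     auth=AUTH)
    r.raise_for_status()
    return r.json()["token"]

def image_size(repo, tag):
    # fetch the v2 manifest and sum the layer sizes it reports
    headers = {
        "Authorization": "Bearer " + registry_token("repository:{}:pull".format(repo)),
        "Accept": "application/vnd.docker.distribution.manifest.v2+json",
    }
    r = requests.get("{}/v2/{}/manifests/{}".format(REGISTRY, repo, tag),
                     headers=headers)
    r.raise_for_status()
    return sum(layer["size"] for layer in r.json()["layers"])

print(image_size("mygroup/myproject", "latest"))

Like the filesystem script above, this counts shared layers once per image, so treat the result as a rough estimate.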

How to "reduce" Jenkins Pipeline output path

We were building our solution without any "Pipeline" in Jenkins until recently, so I'm currently in the process of moving our build to multibranch pipelines.
The issue that I'm running into is that we have a lot of structure in our solution (lots of subfolders, and sometimes long names).
Currently, the Jenkins pipeline extracts everything into a folder that looks like:
D:\ws\ght-build_feature_pipelines-TMQ33LB5OQIQ5VXVMFKFDG2HWCD4MUOGEGUWJUOMZ5D2GI42BIQA
which is very long, and now we are reaching the 260-character limit of MSBuild:
C:\Program Files (x86)\Microsoft Visual
Studio\2017\Professional\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2991,5):
error MSB3553: Resource file
"obj\Release\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.dddddddddd.Resources.resources"
has an invalid name. The item metadata "%(FullPath)" cannot be applied
to the path
"obj\Release\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.dddddddddd.Resources.resources".
The specified path, file name, or both are too long. The fully
qualified file name must be less than 260 characters, and the
directory name must be less than 248 characters.
[D:\ws\ght-build_feature_pipelines-TMQ33LB5OQIQ5VXVMFKFDG2HWCD4MUOGEGUWJUOMZ5D2GI42BIQA\Src\bbbbbb\dddddd\dddddddddddddd\yyyyyyy\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.csproj]
We have so many cases where the length is a problem that refactoring everything would be a huge job, so I'm looking for a way to tell Jenkins to use a smaller path.
What I finally did:
pipeline {
    agent {
        node {
            label 'windows-node'
            customWorkspace "D:\\ws\\${env.BRANCH_NAME}"
        }
    }
    options {
        skipDefaultCheckout()
    }
    ...
}
And I have a step that does the checkout. It was easier for me to get per-job behavior without touching the Jenkins global settings.
Update (for any recent Jenkins instances)
Turns out that with recent Jenkins versions PATH_MAX seems to be ignored.
The only thing it does is issue a warning in the Jenkins log when the value is smaller than a certain threshold, which actually does not matter, as the setting itself will be ignored anyway (as seen on Jenkins 2.249.3). See also: JENKINS-2111
As far as I can tell - the new setting was introduced in jenkins-branch-api 2.0.21:
There's a new property: MAX_LENGTH.
It defaults to 32 characters.
You can set it the same way as PATH_MAX:
As a Java property, to ensure that Jenkins starts with the right setting, e.g.:
-Djenkins.branch.WorkspaceLocatorImpl.MAX_LENGTH=40
or during run-time, using the script console:
jenkins.branch.WorkspaceLocatorImpl.MAX_LENGTH=40
For older Jenkins instances
Actually there's a Java property you can set to specify the length of the directory name, e.g.:
-Djenkins.branch.WorkspaceLocatorImpl.PATH_MAX=20
To make it permanent you have to specify this property in the Jenkins java startup configuration file.
You may also read and write this property using the Jenkins script console, for temporary changes or to just give it a try, as it takes effect immediately, e.g.:
println jenkins.branch.WorkspaceLocatorImpl.PATH_MAX
jenkins.branch.WorkspaceLocatorImpl.PATH_MAX = 20
println jenkins.branch.WorkspaceLocatorImpl.PATH_MAX
Setting this value to 0 changes the path generation behavior.
For details please check:
https://issues.jenkins-ci.org/browse/JENKINS-34564
https://issues.jenkins-ci.org/browse/JENKINS-38706

Different checksum results for jar files compiled on subsequent build?

I am verifying the jar files present on remote Unix boxes against the ones built on a local machine (Windows & Cygwin) with the same JVM.
As a POC, I am trying to verify whether the same checksum is produced for jar files generated on my machine by consecutive builds. I tried the following:
Generated the jar file the first time using an Ant script
Calculated the checksum (e.g. "xyz abc")
Generated the jar file again with the same Ant script without changing anything
I got a different checksum but the same byte count (e.g. "xvw abc")
I am not sure how Java internally produces the class files and then the jar files. Can someone please help me understand the points below?
Does the cksum utility of Unix/Cygwin consider the timestamp of the file when computing the value?
Will the checksum be different for the compiled class files/jar file if everything else is kept the same [compiler version + source code + machine + environment]?
Answer to question 1: cksum doesn't consider the timestamp of the archive file itself (e.g. the jar file), but it does consider the timestamps of the files inside the jar file.
Answer to question 2: the checksums of the individual class files will be the same if all other things are the same (source code, compiler, etc.). The checksums of the jar files will be different. Causes of differences can be the timestamps of the files inside the jar file, or files being put into the archive in a different order (e.g. caused by parallel builds).
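To make the timestamp effect concrete, here is a small self-contained sketch (my addition, in Python rather than Java/Ant) that writes an identical payload into two archives whose entries differ only in their stored timestamps; the byte counts match, but the checksums differ:

import hashlib
import zipfile

def make_jar(path, date_time):
    # identical payload; only the stored entry timestamp differs
    with zipfile.ZipFile(path, "w") as jar:
        info = zipfile.ZipInfo("Hello.class", date_time=date_time)
        jar.writestr(info, b"identical class file bytes")

make_jar("a.jar", (2020, 1, 1, 0, 0, 0))
make_jar("b.jar", (2021, 1, 1, 0, 0, 0))

for name in ("a.jar", "b.jar"):
    data = open(name, "rb").read()
    # same length, different digest: the timestamp lives inside the archive
    print(name, len(data), hashlib.sha256(data).hexdigest())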
If you want to create a reproducible build with Gradle, you can do so with the config below:
tasks.withType(AbstractArchiveTask) {
    preserveFileTimestamps = false
    reproducibleFileOrder = true
}
Maven allows something similar; sorry, I don't know how to do this with Ant.
More info here:
https://dzone.com/articles/reproducible-builds-in-java
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74682318

result from /proc/self/exe is unfriendly in a clearcase view

If I execute a binary in a ClearCase view and look at /proc/self/exe for it on Linux, I see something like the following:
$ cd /proc/19220
$ ls -l exe
lrwxrwxrwx 1 peeterj pdxdb2 0 2012-11-30 13:04 exe -> /home/peeterj/views/peeterj_clang-7.vws/.s/00024/8000028250b8f1d1llvm-config
The clang llvm-config program, not unreasonably, uses this output to try to figure out the absolute fully qualified path that it is located in (I assume in case argv[0] isn't fully qualified).
Is there a way to find the location within the view that this corresponds to? For example, in this case, the llvm-config exe is actually in:
/vbs/bldsupp/linuxamd64/clang/debug/bin
(I'm wondering if it's feasible to modify clang's GetExecutablePath() function to figure this out.)
No trivial solution here (for an old version of ClearCase though):
The technote "PK27447: WITHIN A CLEARCASE DYNAMIC VIEW, THE READLINK() CALL ON LINUX RETURNS THE WRONG PATH FOR THE EXECUTABLE'S /PROC/SELF/EXE VALUE" suggests:
Local fix
Use getcwd(), get_current_dir_name(), getwd() in applications slated for a VOB/View context
Create an interposer library to intercept the readlink() call, and modify it to use any of the above calls to return the proper data
The cause:
/proc/self/exe returns the improper path while getcwd succeeds.
Unfortunately, for /proc/self/exe to return the proper value [from within a VOB/View context] would require a change within the Linux kernel to allow MVFS to "override" the present setting.
IBM LTC has been working on having the Linux community adopt this change so that we can then incorporate the new features within MVFS.
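As a rough illustration of the first local fix (my sketch, in Python rather than an interposer library): reconstruct the executable's location from getcwd() and argv[0] instead of trusting /proc/self/exe when running inside a view:

import os
import sys

def executable_path():
    # inside a dynamic view, /proc/self/exe points at the MVFS storage path
    link = os.readlink("/proc/self/exe")
    argv0 = sys.argv[0]
    if os.sep in argv0:
        # argv[0] contains a path: resolve it against getcwd() instead
        return os.path.normpath(os.path.join(os.getcwd(), argv0))
    return link  # argv[0] came from PATH; fall back to the readlink value

print(executable_path())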
Related: Bug Sun 6189256.

How to get Excel version and macro security level

Microsoft has recently broken our longtime (and officially recommended by them) code to read the version of Excel and its current macro security level.
What used to work:
// Get the program associated with workbooks, e.g. "C:\Program Files\...\Excel.exe"
SHELLAPI.FindExecutable( 'OurWorkbook.xls', ...)
// Get the version of the .exe (from its Properties...)
WINDOWS.GetFileVersionInfo()
// Use the version number to access the registry to determine the security level
// '...\software\microsoft\Office\' + VersionNumber + '.0\Excel\Security'
(I was always amused that the security level was for years in an insecure registry entry...)
In Office 2010, .xls files are now associated with "Microsoft Application Virtualization DDE Launcher", or sftdde.exe. The version number of this exe is obviously not the version of Excel.
My question:
Other than actually launching Excel and querying it for version and security level (using OLE CreateOLEObject('Excel.Application')), is there a cleaner, faster, or more reliable way to do this that would work with all versions starting with Excel 2003?
Use
function GetExcelPath: string;
begin
  result := '';
  with TRegistry.Create do
  try
    RootKey := HKEY_LOCAL_MACHINE;
    if OpenKey('SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\excel.exe', false) then
      result := ReadString('Path') + 'excel.exe';
  finally
    Free;
  end;
end;
to get the full file name of the excel.exe file. Then use GetFileVersionInfo as usual.
As far as I know, this approach will always work.
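For illustration, here is a rough Python equivalent of that lookup (my sketch; assumes the pywin32 package, and mirrors the Delphi code above rather than adding anything new):

import os
import winreg
import win32api  # pywin32

# read excel.exe's directory from the App Paths key
with winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\excel.exe",
) as key:
    path = os.path.join(winreg.QueryValueEx(key, "Path")[0], "excel.exe")

# query the version resource, as GetFileVersionInfo does in the Delphi code
info = win32api.GetFileVersionInfo(path, "\\")
major = win32api.HIWORD(info["FileVersionMS"])
minor = win32api.LOWORD(info["FileVersionMS"])
print("{} is version {}.{}".format(path, major, minor))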
using OLE CreateOLEObject('Excel.Application'))
You can get the installed Excel versions by using the same registry location that this function uses.
Basically you have to clone a large part of that function's registry code.
You can spy on that function call with tools like Microsoft Process Monitor to see exactly how Windows looks for an installed Excel, and then do it exactly the same way.
You have to open the registry at HKEY_CLASSES_ROOT\ and enumerate all the branches whose names start with "Excel.Application."
For example, on this workstation I only have Excel 2013 installed, and that corresponds to HKEY_CLASSES_ROOT\Excel.Application.15
But on another workstation I have Excel 2003 and Excel 2010 installed (testing different XLSX implementations in those two), so I have two registry keys:
HKEY_CLASSES_ROOT\Excel.Application.12
HKEY_CLASSES_ROOT\Excel.Application.14
So, you have to enumerate all those branches with that name, dot, and number.
Note: the key HKEY_CLASSES_ROOT\Excel.Application\CurVer holds the name of the "default" Excel, but what "default" means is ambiguous when several Excels are installed. You may take that default value if you do not care, or decide on your own which to choose, e.g. the maximum or minimum Excel version.
Then, for every specific Excel branch, you should read the default value of its CLSID sub-branch.
For example, HKEY_CLASSES_ROOT\Excel.Application.15\CLSID has a nil-named (default) value equal to
{00024500-0000-0000-C000-000000000046} - fetch that index into a string variable.
Then do a second lookup: go into a branch named like HKEY_CLASSES_ROOT\CLSID\{00024500-0000-0000-C000-000000000046}\LocalServer (use the fetched index).
If that branch exists, fetch its nil-named default value to get something like C:\PROGRA~1\MICROS~1\Office15\EXCEL.EXE /automation
The result is a command line: it starts with a file name (unquoted in this example, but it may be in quotes), followed by optional arguments.
You do not need the arguments, so you have to extract the initial command (the executable path), quoted or not.
Then you have to check whether such an exe file exists. If it does, you may launch it; if not, check the registry for other Excel versions.
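Put together, here is a minimal sketch of that enumeration (my addition, in Python; the answer describes the procedure language-neutrally, and note that on some systems the sub-branch is LocalServer32 rather than LocalServer):

import winreg

def installed_excels():
    found = []
    i = 0
    while True:
        try:
            name = winreg.EnumKey(winreg.HKEY_CLASSES_ROOT, i)
        except OSError:  # no more subkeys
            break
        i += 1
        if not name.startswith("Excel.Application."):
            continue
        try:
            # nil-named (default) value of the CLSID sub-branch
            clsid = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, name + r"\CLSID")
            # second lookup: the LocalServer command line for that CLSID
            command = winreg.QueryValue(
                winreg.HKEY_CLASSES_ROOT,
                r"CLSID\{}\LocalServer".format(clsid))
            found.append((name, command))
        except OSError:
            pass  # branch missing; skip this version
    return found

for name, command in installed_excels():
    print(name, "->", command)  # e.g. ...EXCEL.EXE /automation

From there you would extract the executable path from the command line (quoted or not) and check that the file exists, as described above.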
