SSL_ERROR_SYSCALL in Azure Pipeline to access our internal TFS 2018 server

I have been trying for a while to set up an Azure Pipeline that accesses our internal TFS 2018 server.
I created an "Other Git" service connection named TFS_PRJ, using this intranet URL: https://tfs.mycie.com/DefaultCollection/myProject/.
For authentication, I tried my Windows domain account credentials as well as a PAT created in TFS with all access rights.
When I created the pipeline, I specified my self-hosted agent located on the same intranet and the master branch. Does this branch have an impact when accessing TFS? I can see in the logs: "Starting: Checkout TFS_PRJ@master to s". I don't see any branches in TFS; should I create something in TFS to make it work?
When running the pipeline, I first get a timeout.
Then it runs, and after 6-7 minutes the logs show this error: fatal: unable to access 'https://tfs.myCie.com/DefaultCollection/myProject/': OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to tfs.myCie.com:443
I understood that to access this server, the agent should not use the proxy that is currently used by the pipelines accessing GitHub. This bypass is usually done with a proxy.pac file, but I don't see how to use such a file in the agent configuration. To enable the proxy bypass with the agent files instead, the .proxy file contains http://abs-proxy.myCie.com:443 and the .proxybypass file contains myCie.com.
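For reference, here is what the two files at the agent root currently contain. As far as I understand from the agent documentation, the .proxybypass entries are treated as regular expressions matched against the request URL, so maybe an escaped, host-specific entry such as tfs\.myCie\.com would be safer, but that is only my guess.
Contents of .proxy:
http://abs-proxy.myCie.com:443
Contents of .proxybypass:
myCie.com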
To test that the TFS server is accessible, I logged onto the agent server as the service account, added *.myCie.com to the trusted sites in the IE Internet Options, and was then able to access https://tfs.mycie.com/DefaultCollection/myProject/. I am also able to ping the tfs.mycie.com server.
So, I have several questions:
About the branch: is it normal to use the master branch when I don't see any branches in TFS, or do I need to create something first?
When I run the pipeline and it times out because it can't connect to TFS, what account and what proxy is it trying to use at that point? The ones defined in the service connection?
About the SSL_ERROR_SYSCALL error, is my syntax in the .proxybypass file ("myCie.com") wrong? Do you see anything else that could be done?
Can there be settings or access rights on the TFS server that I need to have or set?
Update 1:
Thank you for this.
I created a YAML file in an Azure Repos repository with this content:
trigger:
- none
pool:
  name: 'myAgent'
steps:
- checkout: none
- task: CmdLine@2
  inputs:
    script: 'git clone -b master https://tfs.myCie.com/DefaultCollection/myProject'
Which returned:
Cloning into 'myProject'...
fatal: could not read Username for 'https://tfs.myCie.com': terminal prompts disabled
I should probably try with the PAT token in the URL...
On the agent, I added the agent's Git folder to the PATH and ran:
git clone https://anything:PATTokenToMyLogin@tfs.myCie.com/defaultcollection/myProject
Which returned:
Cloning into 'myProject'...
fatal: Authentication failed for 'https://tfs.myCie.com/defaultcollection/myProject/'
Then I tried to clone it from the Team Explorer in VS2019.
I found two lists of projects, tfs.myCie.com and "local Git repositories". I couldn't clone projects from tfs.myCie.com, so I tried to clone into the local Git repositories instead, but it didn't work; I'm not sure that was the right thing to do either...
I took this screenshot; could it be that my TFS project is not suited for this?

If you do not have any other branches in TFS, it is normal to use the master branch. You can also specify the branch name in the Get sources tab of the pipeline.
You can check the service connection under Project settings -> Service connections. The pipeline accesses the TFS repo via the account defined in that service connection.
According to the error message OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ..., it seems to be an issue in your network settings; a proxy or a firewall may be blocking access to the remote repository.
You should check whether your TFS server is behind a firewall or a proxy. If so, turn it off and try again. Also try running the clone command directly on a local machine to check whether cloning the affected repository works at all. If the server is behind a proxy, try setting the Git proxy configuration, something like: git config --global http.proxy myproxy.com:8080
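Since in this case the TFS server is on the intranet and the proxy is only needed for external sites such as GitHub, the opposite direction may also be worth trying: keep the proxy configured but exclude the internal host from it. A rough sketch, reusing the proxy URL from the question (the per-URL override is worth verifying against your Git version):
git config --global http.proxy http://abs-proxy.myCie.com:443
# an empty value disables the proxy for URLs under https://tfs.myCie.com
git config --global http.https://tfs.myCie.com.proxy ""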
You need to configure the service account permissions in the TFS Version Control settings.
Update 1
Please check the sample below: I disable the checkout step and add a command line task to clone the TFS 2018 repo, then publish it to Artifacts to check the content.
Note: the repo will be saved in the self-hosted agent folder; you could add a PowerShell task at the end that calls a script to delete the repo folder.
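A minimal sketch of such a cleanup step would be a final command line task whose script removes the cloned folder, for example (myProject is an assumed folder name created by the clone, and the task is assumed to run in the same default working directory as the clone):
:: delete the cloned repo folder after the artifact has been published
rd /s /q "myProject"
The full pipeline sample follows: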
trigger:
- none
pool:
  name: Default
steps:
- checkout: none
- task: CmdLine@2
  inputs:
    script: 'git clone -b {branch name} {TFS repo URL}'
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Agent.BuildDirectory)'
    Contents: '**'
    TargetFolder: '$(build.artifactstagingdirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
Result:
Update 2
It is mainly caused by credentials that have been remembered by the Windows Credential Manager. You should remove the credentials for https://tfs.myCie.com that are stored there.
You can open Credential Manager -> Windows Credentials -> under Generic Credentials -> remove the entries like git:https://tfs.myCie.com
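If you prefer the command line, roughly the following should work from a command prompt on the agent (list first, because the exact target name may differ):
cmdkey /list | findstr /i tfs.myCie.com
cmdkey /delete:"git:https://tfs.myCie.com"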
In addition, please also delete the Visual Studio cache.
Note: You could also try to clone repo on a new machine.

Related

Your build failed to run: Couldn't read commit xxxxxxxx

I'm trying to build and deploy an image on GCP with Cloud Build.
I have a Rails API repo on Bitbucket running on Docker; the repo is synced with a Google Cloud Source repository.
I configured a trigger on Cloud Build that fires when a commit is made on my master branch.
Trigger Configuration:
Service account permissions:
But when the master branch gets a commit, the trigger returns the following error:
Your build failed to run: Couldn't read commit xxxxxxxx
Build Error:
I checked the GCP docs and can't find anything. I think the issue might be at the IAM level; maybe the service account needs more permissions.
For me, the Cloud Build API needed to be enabled. Once I did this my build ran; I didn't need to do anything special for permissions.
I actually edited the IAM setting for this service account to add:
Project > Editor
Source Repository > Reader
Now it works
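If you prefer doing the same from the command line, the equivalent steps would look roughly like this (the project ID, project number, and role choices below are assumptions based on the fixes described above, not values from the question):
gcloud services enable cloudbuild.googleapis.com
# grant the Cloud Build service account read access to the mirrored source repository
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.reader"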

##[error]Git fetch failed with exit code: 128

We have a Git repo on TFS and I am trying to create a pipeline using Azure Pipelines to connect to the TFS repo.
I get the following error:
fatal: unable to access 'http://tfs.****************': Could not resolve host: tfs.******
##[error]Git fetch failed with exit code: 128
I would suggest you first use the "git clone" command line to clone the remote repo.
Check whether it works for that repo when you run it manually from the build agent.
This will narrow down whether the issue is related to your environment or to the pipeline.
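For example, from the build agent, something like the following (the hostname and repository path are placeholders, not your real values; given the "Could not resolve host" message, DNS resolution is worth checking first):
nslookup tfs.yourcompany.com
git clone http://tfs.yourcompany.com/DefaultCollection/YourProject/_git/YourRepo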
If you are able to use the git command to connect and clone that repo, this means there is something wrong with your build service account. Make sure the build service account has access to that repo. You could also directly change the service account to the one you used to run the git command.
If you are not able to, then this may be related to the network environment. Make sure your build agent is able to access the on-premises TFS server. Temporarily turn off the firewall and any proxy. Also try to log into the TFS web portal directly from a browser.
It seems that this is a self-deployed TFS server, so you need to make sure that the server can be reached from Azure DevOps.
Based on the URL in your post, I assume the server is not reachable from the public internet, so the TFS server is either on-prem or on a VM in Azure. Reach out to your infrastructure team to find out where the server is and how a connection can be established from the build agent used by Azure DevOps to the TFS server.

Jenkins: Git Failed to connect to repository, returned status code 128

I'm attempting to clone a remote GitHub enterprise repository and am running into the following error after adding my remote repo's URL to the Git Plugin in my Jenkins configuration:
Failed to connect to repository : Command "git.exe ls-remote -h https://<<server>>/M/AS.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'https://<<server>>/M/AS.git/': Received HTTP code 502 from proxy after CONNECT
First of all, you need to set up GitHub with Jenkins as described below.
Go to GitHub --> click on the profile dropdown --> Settings --> Developer settings --> Personal access tokens -->
generate a new token --> select all scopes --> copy the token.
Then go to Jenkins --> Manage Jenkins --> GitHub settings --> add user --> select Secret text --> paste the token.
Then test the GitHub connection by clicking the Test button. If it is successful, Jenkins is ready to clone the GitHub repository.
Also add the webhooks, integrations & services in GitHub.
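To verify the token and the connection independently of the plugin, you could run the same command Jenkins runs from the Jenkins machine (the server, path, and credentials below are placeholders, not real values):
git ls-remote -h https://USERNAME:TOKEN@your-ghe-server/M/AS.git HEAD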
Make sure you have generated a Git API token in the Git repository and added the same token in the Jenkins credentials.
If that is done, I don't think there will be any issue in connecting Git to Jenkins.
Also, you can test whether your Git server is able to ping your Jenkins server (if you are running your own Git and Jenkins).
All the best.
Check your failed job's environment variables.
If there is no environment variable named NO_PROXY, set one in the configuration of your Jenkins job:
NO_PROXY=.mycompany.com
Here I assume your GitHub Enterprise has a URL like myserver.mycompany.com (replace mycompany.com with your own domain).
That will stop Jenkins from trying to reach the remote server through the proxy.
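To confirm that the proxy is what returns the 502, you could compare a direct and a proxied request from the Jenkins machine (the server name is a placeholder):
# direct request, bypassing any proxy configured via environment variables
curl -v --noproxy '*' https://myserver.mycompany.com/
# request through the proxy (uses the https_proxy / HTTPS_PROXY environment variable, if set)
curl -v https://myserver.mycompany.com/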
If you are running from a VM, make sure you have installed the 'git-core' package.
You must have Git installed on the machine where Jenkins is running.
I had the same issue, and for me a restart of the machine where Jenkins is installed helped.

hooking gitlab with jenkins

I'm trying to connect Gitlab CE 8.16 with Jenkins 2.46.1 using the GitLab hook plugin 1.4 to trigger builds when push or merge.
So I checked "Build when a change is pushed to GitLab", copied the GitLab CI Service URL: http://server:port/project/my-project and the security token, to gitlab webhook, disabled ssl verification and when I clicked on Test, I got this error :
Hook execution failed: execution expired
What am I doing wrong, please? How can I make it work?
There are a few things needed to make it work; there is documentation here:
https://github.com/jenkinsci/gitlab-plugin#global-plugin-configuration
So:
Make sure the jenkins user that you use on the GitLab side has the proper permissions - it needs project access and the APITOKEN needs to be there
Create the webhook on the project in GitLab that corresponds to the project in Jenkins (the Jenkins project that uses the git repo you are working with)
In GitLab, when you create webhooks to trigger Jenkins jobs, use this format for the URL and do not enter anything for 'Secret Token': https://USERID:APITOKEN@JENKINS_URL/project/YOUR_JOB
You can use a non-https link too and skip SSL verification if the certificate is not valid. Either way, the GitLab server has to be able to connect to the name and port you are using there.
Hit Test and it should work. If not, you might not be able to connect to the server: make sure your Jenkins server is listening on the URL and port that you are using, because the error seems to be related to that not being right.
It's possible that the GitLab server is not allowed to connect to the internet, or to the network your Jenkins server is on, or there might be a firewall blocking the port you are trying to connect to (80/443) on the local Jenkins machine.
Try to do for example a curl to the Jenkins server and see what comes back:
curl http://you.jenkins.fqdn/
If you don't get something like:
<html><head><meta http-equiv='refresh' content='1;url=/loginEntry?from=%2F'/><script>window.location.replace('/loginEntry?from=%2F');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
</body></html>
then you cannot connect.
If it's not the Jenkins server where the issue is, you need to ask the network people that manage the server about it.
Hope that helps, good luck!
Make sure to use the latest 1.4 GitLab hook plugin (1.4.3, March 2016)
Look into your GitLab production.log, as in this issue, and see if this is a proxy configuration problem.
You should at least see the context of that error message there.
Here is what worked for me:
Ensure there is a merge request, even if you don't intend to actually merge any branches.
Go to branches -> select 'merge request' for a branch to merge -> create the request
Now try to test the integration.

Jenkins 0 files published after build

I have a Jenkins server set up with two jobs.
The first job polls the develop branch and builds the project on the Jenkins server. I then have another job that polls the production branch and builds it on another Jenkins slave, which is the staging server. This job is configured so that, on a successful build, it publishes the artefacts over SSH to the production server.
All the SSH keys are set up and the staging server connects to the production server, but 0 files are transferred:
using GIT_SSH to set credentials Bitbucket Repo
using .gitcredentials to set credentials
Checking out Revision 89874cc01a9f669df69817b1049b1ab98ecb19d3 (origin/Production)
SSH: Connecting from host [nginx-php-fastcgi]
SSH: Connecting with configuration [AmazonAWS] ...
SSH: Disconnecting configuration [AmazonAWS] ...
SSH: Transferred 0 file(s)
Finished: SUCCESS
I checked the staging workspace and the files are being built there, just not sent to the prod server. Any suggestions?
I have also tried a different Remove prefix, as suggested below and in Jenkins transferring 0 files using publish over SSH plugin.
You should remove /* from the Remove prefix line
Edit:
Your Source files cannot be outside of the job's workspace. If your files are in the root of the workspace, just set it to * to transfer all workspace files, or **/* to include subdirectories. Otherwise, specify a pattern relative to ${WORKSPACE}.
Even adding a leading / will not escape that, as all it does is append the path to the workspace; in your case it becomes ${WORKSPACE}/var/www/workspace/opms-staging-server. Even using the parent directory ../ will not work. This is for security reasons; otherwise a job configurer could transfer private files off the Jenkins server.
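For example, if the staging job builds everything into a build folder inside the workspace (the paths below are assumptions for illustration, not taken from your job), a transfer set that actually picks up files would look like:
Source files:      build/**/*
Remove prefix:     build
Remote directory:  /var/www/html
With Remove prefix set to build, the files land directly under /var/www/html instead of /var/www/html/build.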
If you need to get files from another job, you need to use Copy Artifacts build step. Tell me if that's your case, and I will explain further.
