Intermittent CLI API 500 when Downloading File - jenkins

I'm using a gdrive CLI command on a Jenkins web node to automatically download a file from Google Drive during a build process.
This used to work; however, as of a week or two ago, the command to download the file started intermittently producing 500 errors with no message.
The command that's being run is: gdrive download query "name = '16.7.3.zip'".
Sometimes the above command downloads the file, but sometimes it fails with nothing more than the bare 500 error.
Is anyone able to give advice on where to start with this issue? Is it something on Google's side?
I've read a few articles explaining that this might be throttling from the API; however, I'd have expected a different error code, e.g. a 403 with the message "The download quota for this file has been exceeded."
I have the following specs installed:
gdrive: 2.1.0
Golang: go1.6
OS/Arch: linux/amd64

Intermittent 500 errors are expected behavior with the Google Drive API; the recommended handling is simply to retry with exponential backoff. They are generally caused by internal timeouts within the Drive infrastructure. Sometimes these are related to service problems; other times they can be caused by a request that triggers a large amount of work.
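As a rough illustration, a backoff wrapper around the command from the question might look like the sketch below, assuming gdrive exits non-zero when the download fails; the attempt limit and base delay are arbitrary choices, not values mandated by the API.
max_attempts=5
delay=1
attempt=1
# Retry the download, doubling the sleep between attempts (adding random
# jitter to each sleep is also common practice).
until gdrive download query "name = '16.7.3.zip'"; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "Giving up after $max_attempts attempts" >&2
    exit 1
  fi
  sleep "$delay"
  delay=$((delay * 2))
  attempt=$((attempt + 1))
done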

Related

HTTP Error 500.37 - ANCM Failed to Start Within Startup Time Limit aspnetcore6.0

I am facing the issue below after switching from VS2019 to VS2022. I am not able to run my API project; it throws the error below. I searched on Google and read many articles but had no success. Please suggest how to resolve this problem. I checked and found that the newer .NET Core version does not support the old version's code.
HTTP Error 500.37 - ANCM Failed to Start Within Startup Time Limit
Common solutions to this issue:
ANCM failed to start after -1 milliseconds
Troubleshooting steps:
Check the system event log for error messages
Enable logging the application process' stdout messages
Attach a debugger to the application process and inspect

Getting 503 Error on Google Dataflow - UNAVAILABLE

I'm trying to run an Apache Beam Python pipeline on Dataflow, but immediately (10-15 seconds) after launching the job it gets a failed status.
The error on logs:
Failed to start the VM, launcher-2021030302333314603154945777358700, used for launching because of status code: UNAVAILABLE, reason: One or more operations had an error: 'operation-1614767615027-5bc9f6216a93c-7b50752f-842a8707': [UNAVAILABLE] 'HTTP_503'..
The error message gets cut short, so I cannot dig any deeper. I believe I added all the relevant permissions, etc., but I can't get it to work. Initial research suggests that it could be a backend issue or a permissions issue.
In addition the same pipeline has worked in other projects.
I'd appreciate it if someone could help me debug and fix this.
It was because of the region. I moved the region from 'asia-southeast2' to 'us-central1' and it worked.
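For reference, the region is normally chosen through the pipeline's --region option at launch time. A minimal sketch (the script name, project, and bucket are placeholders, not taken from the question):
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --region us-central1 \
  --temp_location gs://my-bucket/tmp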

Jenkins doesn't update GitHub check status sometimes

I'm using Jenkins 2.15 (GitHub plugin 1.29.3) based CI for my GitHub core repo. It works fine, but sometimes a Jenkins build doesn't update the GitHub check status.
I see nothing relevant in the Jenkins log.
Any idea how to debug and hopefully fix this issue?
As far as I know, a check status update is just an HTTP request to the status API: https://developer.github.com/v3/repos/statuses/
I experienced similar behavior with a database: neither the client application nor the database reported errors, and each was on a different host.
What I did was create a bash script on host A that pings host B and logs each response:
ping www.host_B.com | while read pong; do echo "$(date): $pong"; done >> /tmp/ping-test-$(date +%F).log
Then, when the sporadic database connection error occurred, the log file helped me determine that the error was related to:
Network issues
Latency issues
Internet service provider issues
In your case, you could run a simple curl against the status API on a schedule and compare its log with the sporadic behavior you've detected.
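Something along these lines, for example; OWNER, REPO, and SHA are placeholders, and GITHUB_TOKEN is assumed to hold a token with access to the repo:
# Log a timestamp plus the HTTP status code returned by the statuses endpoint,
# mirroring the ping log above, so failures can be lined up with missing checks.
echo "$(date): $(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/commits/SHA/statuses)" \
  >> /tmp/github-status-$(date +%F).log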

SVN Server Not Responding to Write Requests

I am in the process of trying to set up an SVN repo using an Apache web server. I was able to get the repo created and configured without too many problems. I can reach the repo via the browser, so I think the Apache configuration is correct. The problem comes when I try to do the initial commit: the commit command hangs in the terminal for several minutes before returning svn: E175012: Connection timed out. The initial commit is a single file, less than 100 KB. Stranger still, after the command times out, it leaves behind an httpd process on my system that uses 90% of the CPU.
I did some research to see if I could solve the problem myself, but so far nothing has worked. I was able to use Charles Proxy to monitor the HTTP requests, and it looks like the svn client sends the POST but never receives a response from the server. After the default timeout (10 minutes), the client gives up and displays the timeout error.
I also tried setting up the repo using svnserve instead of Apache. I was able to read and write to the repo using svn://. However, the code I am working on expects to communicate with the repo over HTTP, so I still need to figure out what the problem is with Apache.
Does anyone know what could be causing this issue? Are there any other steps I can take to troubleshoot the problem for myself?
[Update]
I checked the logs for my Apache server. Here is what I'm seeing when I run the commit:
_myip_ - - [28/Feb/2017:10:04:04 -0500] "OPTIONS /my/repo HTTP/1.1" 200 190 "-" "SVN/1.9.5 (x86_64-apple-darwin16.1.0) serf/1.3.9"
_myip_ - - [28/Feb/2017:10:04:04 -0500] "OPTIONS /my/repo HTTP/1.1" 200 97 "-" "SVN/1.9.5 (x86_64-apple-darwin16.1.0) serf/1.3.9"
[Update 2]
In an attempt to narrow down the cause further, I tried setting up a different Apache server in a Linux virtual machine. That server worked perfectly, and I was even able to read/write to it from OS X. So it would seem the issue is something specific to the Apache server on OS X.
Please try this:
$ sudo chmod -R 775 /var/lib/svn
Reference URL: https://gotechnies.com/setup-svn-server-ubuntu/
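Note that the /var/lib/svn path comes from that Ubuntu guide, so adjust it to wherever your repository actually lives. Since the failing server is on OS X, ownership by the Apache user is also worth checking; a hypothetical variant:
# _www is the user Apache runs as on OS X; the repo path is a placeholder.
$ sudo chown -R _www:_www /path/to/svn/repo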

Getting 502 Bad Gateway after deploying Play 2.1.0 app to Cloudbees

I tried to deploy a Play app to Cloudbees (only via a push to the git repo, from which it is built by Jenkins). It compiled and should work, but I get a "502 Bad Gateway" error when loading the app. No error is shown in the console, only that the app answers "502 Bad Gateway" when it is accessed, which is what I see in the browser, too.
Cloudbees says that no other manipulation is necessary: just clone/pull the ClickStart project, make it your application, and push it back. The Play project works fine locally.
I am very grateful for any help. Please let me know if I need to provide any other information. Thanks a lot!
Edit: It works fine on Heroku after only adding a Procfile. I don't get what the problem is with Cloudbees...
In this case the error is due to the database needing evolutions to be run before the app can start:
[warn] play - Run with -DapplyEvolutions.default=true and -DapplyDownEvolutions.default=true if you want to run them automatically (be careful)
Oops, cannot start the server.
#6eg39l651: Database 'default' needs evolution!
You can see the error in your application console:
https://run.cloudbees.com/a/strehlst#app-manage/logs:strehlst/odzh
or via bees app:tail if you have the bees CLI installed.
You can also deploy directly from your desktop if you like:
play dist
bees app:deploy -t play2 dist/yourapp.zip
And it will push directly to your app (if you don't want a continuous deployment pipeline).
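As for the evolutions warning itself, one way to apply the suggested flag, assuming the standard Play 2.1 dist layout where the generated zip contains a start script (folder names below are placeholders):
unzip dist/yourapp.zip -d /tmp/yourapp
cd /tmp/yourapp/yourapp-1.0
chmod +x start   # unzip does not always preserve the execute bit
# Pass the flag from the warning as a JVM option so pending evolutions
# are applied on startup.
./start -DapplyEvolutions.default=true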
