I am in the process of trying to set up an SVN repo using an Apache web server. I was able to get the repo created and configured without too many problems, and I can reach the repo via the browser, so I think the Apache configuration is correct. The problem comes when I try to do the initial commit. When I run the commit command in the terminal, it hangs for several minutes before returning svn: E175012: Connection timed out. The initial commit is a single file, less than 100 KB. Stranger still, after the command times out it leaves behind an httpd process on my system that uses 90% of the CPU.
I did some research to see if I could solve the problem myself, but so far nothing has worked. I was able to use Charles Proxy to monitor the HTTP requests and it looks like the svn client is sending the POST but it is never receiving a response from the server. After the default timeout (10 minutes) the client gives up and displays the timeout error.
I also tried setting up the repo using svnserve instead of Apache. I was able to read and write to the repo using svn://. However, the code I am working on expects to communicate with the repo over HTTP, so I still need to figure out what the problem is with Apache.
Does anyone know what could be causing this issue? Are there any other steps I can take to troubleshoot the problem for myself?
[Update]
I checked the logs for my apache server. Here is what I'm seeing when I run the commit:
_myip_ - - [28/Feb/2017:10:04:04 -0500] "OPTIONS /my/repo HTTP/1.1" 200 190 "-" "SVN/1.9.5 (x86_64-apple-darwin16.1.0) serf/1.3.9"
_myip_ - - [28/Feb/2017:10:04:04 -0500] "OPTIONS /my/repo HTTP/1.1" 200 97 "-" "SVN/1.9.5 (x86_64-apple-darwin16.1.0) serf/1.3.9"
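It may also be worth watching the Apache error log while the commit runs; the path below assumes the stock OS X Apache install, so adjust it if your logs live elsewhere:
# follow the Apache error log while re-running the commit
sudo tail -f /var/log/apache2/error_log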
[Update 2]
In an attempt to further narrow down the cause of this issue, I tried setting up a different Apache server in a Linux virtual machine. That server worked perfectly, and I was even able to read/write to it from OS X. So it would seem that the issue is something specific to the Apache server on OS X.
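A quick check when comparing the two servers - assuming httpd is on the PATH on both - is the Apache version and whether the DAV modules are actually loaded:
# compare Apache versions and confirm mod_dav / mod_dav_svn are loaded
httpd -v
httpd -M | grep -i dav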
Please try this:
$ sudo chmod -R 775 /var/lib/svn
Reference: https://gotechnies.com/setup-svn-server-ubuntu/
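Besides the permissions, the repository usually needs to be owned by the user Apache runs as; on Ubuntu (the platform the linked tutorial targets) that is typically www-data, so something along these lines may also be needed (same path as above):
$ sudo chown -R www-data:www-data /var/lib/svn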
For a few days now, our Jenkins server has been returning "HTTP ERROR 404 Not Found" from Jetty. The interesting behavior is that if I reload the page a number of times (5-20), the Jenkins UI suddenly appears, but on the next click it is "HTTP ERROR 404 Not Found" again. Jenkins runs in a container on k3s. The Jenkins logs do not show any issues, and the Java process does not crash. I tried the latest Jenkins version and a few older ones (all Alpine-based). Until last week it had been working for several months without problems. Any ideas?
The problem here was the Traefik ingress configuration. I had used
- path: /jenkins
which had worked fine for many months; after a reboot it did not anymore. When I changed it to
- path: /
it worked again. I do not understand why the behavior of Traefik changed, but if someone runs into the same issue, maybe this post helps.
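For anyone debugging something similar, a quick way to see how an ingress is currently routing (the resource name and namespace below are assumptions; adjust them to your deployment):
# list all ingresses, then inspect the Jenkins one in detail
kubectl get ingress --all-namespaces
kubectl describe ingress jenkins -n jenkins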
I'm using Jenkins 2.15 (GitHub plugin 1.29.3) based CI for my GitHub core repo. It works fine, but sometimes the Jenkins build doesn't update the GitHub check status.
I see nothing relevant in the Jenkins log.
Any idea how to debug and hopefully fix this issue?
As far as I know, the check status update is just an HTTP request to the status API: https://developer.github.com/v3/repos/statuses/
I experienced a similar behavior with a database. The client application and the database had no errors. Each one was on a different host.
What I did was create a bash script on host A to ping host B:
ping www.host_B.com | while read pong; do echo "$(date): $pong"; done >> /tmp/ping-test-$(date +%F).log
Then, when the sporadic database connection error occurred, the log file helped me detect that the error was related to:
Network issues
Latency issues
Internet service provider issues
In your case, you could perform a simple curl to the status API and compare it against the sporadic behavior you're seeing.
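For example, something along these lines exercises the same statuses API by hand; OWNER, REPO, COMMIT_SHA, and $GITHUB_TOKEN are placeholders you would substitute:
# post a test commit status directly to the GitHub v3 statuses API
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d '{"state":"success","context":"ci/jenkins","description":"manual test"}' \
  https://api.github.com/repos/OWNER/REPO/statuses/COMMIT_SHA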
So I have a rails application that I built and deployed via AWS Elastic Beanstalk a few months ago. The project was put on hold so I terminated the environment, expecting to be able to re-deploy when we returned to this project.
Despite my app still running just fine in my local dev environment, I cannot get it to deploy. The error from my eb-activity.log:
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The database is a standalone AWS RDS instance that I can successfully test the connection to, so I know it's running. I have added the requisite environment variables and configured my database.yml accordingly. To be clear, this is an application that used to work. I hadn't made any changes between the time I terminated the environment and when I went to re-deploy.
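One way to double-check what the deployed app actually sees - assuming the EB CLI is configured for this environment - is to list the environment properties and then look at the instance directly:
# show the environment properties Elastic Beanstalk passes to the app
eb printenv
# then SSH to the instance to inspect puma/nginx state
eb ssh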
The root problem seems to be that nginx isn't being configured properly, as trying to access the server returns:
502 Bad Gateway
nginx/1.12.1
and when I check the nginx error.log it's filled with errors like this:
2018/09/19 14:12:35 [crit] 3069#0: *653 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.47.147, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "172.31.47.147"
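A check worth making from the instance (via eb ssh) is whether Puma ever created the socket nginx is pointing at; the path below is taken from the error above:
# does the socket exist, and is a puma process running at all?
ls -l /var/run/puma/
ps aux | grep [p]uma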
Naturally, I googled my error, and found this stackoverflow post.
I've tried adding these suggested lines from the top-rated answer to my puma.rb
bind "unix:///var/run/puma/my_app.sock"
pidfile "/var/run/puma/my_app.sock"
Which caused no change at all.
I made sure to try the other suggestions, including having a direct look at the nginx configuration file. I did find that there's no upstream set up in the config. As best as I can see, the nginx aspect of the deployment pipeline is automated by Elastic Beanstalk so clearly something else I've set must be incorrect.
I've found that under no circumstances can I get the app to deploy using eb deploy; I can only make changes by creating a new environment each time. I've recreated the app countless times, experimenting with different settings, versions of gems and packages, different Ruby versions, etc. All in all, I still can't effect any change in the error; I can't even get a new error! Just the same PG::ConnectionBad or 502 Bad Gateway, depending on whether I look from the console or the browser.
From my googling I've come under the impression that this is related to puma in some regard but puma is a bit of a black box for me.
I'm feeling pretty lost here, I'd really appreciate any guidance you'd be willing to share. Feel free to ask for more info from any log or file, I'm happy to provide more detail. Thanks in advance!
Could it be the RDS security group? Is it configured so that your EB instances can reach it?
You could also try cloning the DB, to make sure it's not some weird database issue with the old one, and try connecting to the clone.
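A quick way to rule the network path in or out is to test connectivity to the RDS endpoint from an EB instance itself; the endpoint below is a placeholder for your instance's:
# from the EB instance, check whether the Postgres port on RDS is reachable
nc -zv my-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com 5432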
So this won't be a very helpful answer, as I never did resolve the problem. I didn't want to just leave this thread hanging though.
I ended up just creating a new rails environment, re-adding all the gems and porting my controllers/views/models/routes. Once I did that I was able to deploy without issue.
I can confirm that the issue wasn't with the security groups or the database itself. The fresh rails app was able to access the RDS instance without issue.
Thank you all for your comments and attempts to help, it is much appreciated!
When attempting to run Boot inside Docker, using the adzerk/boot-clj image, I receive connection refused errors.
Specifically, when the container starts up, Boot is started, and then a stack trace is output. The trace (which is not easy to copy and paste between computers with no connectivity) essentially relates to downloading https://github.com/boot-clj/boot/releases/download/2.7.2/boot.jar and receiving "Connection refused" errors.
I'm asking, and answering, this question in the hope that it might help someone else.
Where to start?
My main problem was with a Docker + Clojure + Boot setup, specifically when running “boot” from inside the container. Doing this spewed out a stack trace. This is where my journey begins.
I'm using the adzerk/boot-clj image. I've used it locally (OS X) without issue; the problem I experienced was on a VM (CentOS 7) hosted within a corporate data center.
docker run -ti adzerk/boot-clj
Issuing this starts up the container; the entry point is Boot, and it starts pulling down some jars, specifically boot.jar from GitHub. The resulting stack trace details several problems, but the crux of it was:
“java.net.ConnectException: Connection refused” (connecting to Clojars.org:443)
Hmmm…
So instead of running Boot straight away in the container, I overrode the container entry point with --entrypoint bash so I could prod around a little.
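In other words, roughly:
# override the image's default entry point (boot) with a shell
docker run -ti --entrypoint bash adzerk/boot-clj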
So, wget - connection refused.
What about without Docker in the way? Same thing. Connection refused.
After a little wrangling with the network team, I found that the https_proxy env variable needs to be set on CentOS to route traffic out to the internet. A very specific issue to my situation.
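For reference, that amounted to something like the following, where the proxy address is a placeholder for our corporate proxy:
# route outbound HTTP/HTTPS traffic through the corporate proxy
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80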
However….
wget was now fine, both on the host and inside the adzerk/boot-clj container. Boot, however, was not.
In an effort to simplify things even more, I took Docker out of the equation entirely, and used boot locally.
Installed java-1.8.0-openjdk.x86_64, installed Boot. Same problem.
So I dug around a little and found this: https://github.com/boot-clj/boot-bin/issues/2
This was a start. It mentions setting the BOOT_JVM_OPTIONS, specifically https.proxyHost and https.proxyPort.
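That translates to something like this (again, the proxy host and port are placeholders):
# pass the proxy settings to the JVM that boot launches
export BOOT_JVM_OPTIONS="-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=80"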
It still didn’t work… Arrrg.
OK, let’s take Boot out of the equation.
I wrote a very simple test harness in Java that connects to https://clojars.org and attempts to read the index page, copied from https://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html, and set the JVM_OPTS.
It still fails. “Connection refused”
…. Weird beard.
I finally stumbled on this SO question - https://stackoverflow.com/questions/43695299/java-httpurlconnection-works-on-windows-and-fails-on-linux - specifically the answer from Stephen C:
“Java doesn't necessarily respect your system's default proxy settings. Since you are able to "curl" the URL on the Linux machine, the most likely explanation is that Java is not using the proxy that you have configured. The following links explains various ways to configure the proxies for Java:”
So, taking the first link - https://stackoverflow.com/questions/120797/how-do-i-set-the-proxy-to-be-used-by-the-jvm - and the answer from Leonel, I issued:
java -Dhttps.proxyHost=xxx -Dhttps.proxyPort=80 HelloWorld
I got an error, but a different one. This is progress: "Unable to tunnel through proxy"
A quick Google of this led me here: http://www.oracle.com/technetwork/java/javase/8u111-relnotes-3124969.html - "Disable Basic authentication for HTTPS tunneling"
So I updated the command to:
java -Dhttps.proxyHost=xxx -Dhttps.proxyPort=80 -Djdk.http.auth.tunneling.disabledSchemes="" HelloWorld
Profit.
Info:
java -version
openjdk version "1.8.0_144"
OpenJDK Runtime Environment (build 1.8.0_144-b01)
OpenJDK 64-Bit Server VM (build 25.144-b01, mixed mode)
Sorry for all my profanity Boot.
I have set up a proxy in the build file which was working perfectly on my Mac at work. But on my Ubuntu 11.04 laptop at home the proxy seems to never return a valid response (checking with SC.ok(response)).
I have checked by curling the URL:
curl -G http://localhost:4020/api/client
Output:
[{"id":"1","title":"Test","status":"1","created":"2011-07-03 07:36:44","updated":"2011-07-03 07:36:44","brands":null},
{"id":"2","title":"Arla","status":"1","created":"2011-07-03 07:43:53","updated":"2011-07-03 07:43:53","brands":null}]
Anyone got any ideas?
Thanks
Mark
I have opened an issue similar to this one. It seems that the Thin web server can't handle gzipped content from an upstream server.
If you can, disable gzip on the remote server, and please upvote the issue on GitHub so that it has a better chance of getting fixed.
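A quick way to check whether the upstream is actually returning gzipped responses (URL taken from the question) is:
# dump the response headers and look for Content-Encoding: gzip
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://localhost:4020/api/client | grep -i content-encoding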
If you are using Apache at home, check your log files (while running the app):
tail -20f /var/log/apache2/error.log
You might get this error:
Client sent malformed Host header
This is because the Host header (more precisely the port) is different from Apache's standard localhost:80.