Git clone from Gerrit is hanging on 'Cloning into' - gerrit

I got an SSH clone URL from the Gerrit website.
git clone "ssh://username@server:29418/code-repo"
But it's really slow and hangs for quite a while at 'Cloning into xxx'.
Thu Jul 22 09:53:47 CST 2021
Cloning into 'code'...
remote: Total 14828 (delta 0), reused 14828 (delta 0)
Receiving objects: 100% (14828/14828), 5.68 MiB | 0 bytes/s, done.
Resolving deltas: 100% (6532/6532), done.
Thu Jul 22 09:54:04 CST 2021
On another Gerrit server, the same repo is downloaded in one second.
Thu Jul 22 09:34:43 CST 2021
Cloning into 'code'...
remote: Counting objects: 14828, done
remote: Finding sources: 100% (14828/14828)
remote: Total 14828 (delta 6521), reused 14806 (delta 6521)
Receiving objects: 100% (14828/14828), 5.69 MiB | 0 bytes/s, done.
Resolving deltas: 100% (6521/6521), done.
Thu Jul 22 09:34:44 CST 2021
Why is Gerrit hanging at 'Cloning into ...'?
How can I fix it?
Many thanks.

This was fixed by running GC (Git garbage collection) on the repository.
I do not know why.
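For reference, Gerrit exposes garbage collection as an SSH command, so it can be triggered per project, subject to having the needed permission (the server and project names below are the ones from the question):
ssh -p 29418 username@server gerrit gc code-repo
A plausible explanation for the slowness is a repository with many loose, unpacked objects: the server then has to locate and compress them on the fly for every clone, and repacking them during GC removes that work. Note that the slow clone's output above is missing the "remote: Counting objects" progress lines that the fast server prints.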

Related

Unable to access from local git to bitbucket, config problem

Maybe it is the new TLS?
fatal: unable to access 'https://user@bitbucket.org/team/repo.git/': Received HTTP code 404 from proxy after CONNECT
I used this command:
git config --global --unset http.proxy
and after that it worked:
$ git push
Counting objects: 17, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (17/17), 9.83 KiB | 0 bytes/s, done.
Total 17 (delta 8), reused 0 (delta 0)
remote:
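If unsetting the global value is not enough, a proxy may still be configured at another level; these standard git-config and environment checks (not part of the original answer; the env check assumes a Unix-like shell) show where it comes from:
git config --global --get http.proxy
git config --system --get http.proxy
env | grep -i proxy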

Can't push the iOS code on git

It was working until a few days ago, and now when pushing code I get the following in the terminal.
Counting objects: 218, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (218/218), done.
Writing objects: 100% (218/218), 485.32 KiB | 0 bytes/s, done.
Total 218 (delta 93), reused 0 (delta 0)
error: RPC failed; result=22, HTTP code = 401
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
I am not getting why this is happening. After some googling I tried increasing the buffer size, but I still have the same problem.
Can somebody help here?
Thanks in advance.
Increase the Git buffer size:
git config --global http.postBuffer 157286400
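You can confirm the setting took effect with a standard config query (157286400 bytes is 150 MiB):
git config --global --get http.postBuffer
Also note that HTTP code 401 means Unauthorized, so if a bigger buffer alone does not help, re-checking the remote URL (git remote -v) and re-entering the stored credentials on the next push may be worth a try; that suggestion is a guess, not part of the original answer.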

An error occurred executing 'gear postreceive' on openshift

I enabled Jenkins on my OpenShift app today. I logged in on the Jenkins URL but did not change any config settings. Later today I tried to git push a new version of my site, but I got the following error:
$ git push origin
Counting objects: 17, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 917 bytes | 0 bytes/s, done.
Total 9 (delta 7), reused 0 (delta 0)
remote: Executing Jenkins build.
remote:
remote: You can track your build at https://jenkins-namespace.rhcloud.com/job/app-build
remote:
remote: Waiting for build to schedule......
remote: **BUILD FAILED/CANCELLED**
remote: Please see the Jenkins log for more details via 'rhc tail'
remote: !!!!!!!!
remote: Deployment Halted!
remote: If the build failed before the deploy step, your previous
remote: build is still running. Otherwise, your application may be
remote: partially deployed or inaccessible.
remote: Fix the build and try again.
remote: !!!!!!!!
remote: An error occurred executing 'gear postreceive' (exit code: 1)
remote: Error message: CLIENT_ERROR: Failed to execute: 'control post-receive' for /var/lib/openshift/52UUID/jenkins-client
remote:
remote: For more details about the problem, try running the command again with the '--trace' option.
To ssh://52UUID@app-namespace.rhcloud.com/~/git/app.git/
6a9fe46..a551871 master -> master
so I ran $ rhc tail jenkins:
Jul 10, 2015 2:43:45 PM hudson.plugins.openshift.OpenShiftCloud hasCapacity
INFO: No capacity remaining. Not provisioning...
Jul 10, 2015 2:43:45 PM hudson.plugins.openshift.OpenShiftCloud provisionSlave
INFO: Not provisioning new builder due to lack of capacity
Jul 10, 2015 2:43:45 PM hudson.plugins.openshift.OpenShiftCloud provision
INFO: Provisioned 0 new nodes
Jul 10, 2015 2:43:45 PM hudson.plugins.openshift.OpenShiftCloud cancelItem
INFO: Cancelling Item
Jul 10, 2015 2:43:45 PM hudson.plugins.openshift.OpenShiftCloud cancelItem
WARNING: Build app-build appbldr has been canceled
So I guess I have a lack of capacity, so I checked $ rhc show-app app --gears quota:
Gear Cartridges Used Limit
------------------------ --------------------------------------------------- ------ -----
52UUID python-2.7 postgresql-9.2 cron-1.4 jenkins-client-1 0.9 GB 1 GB
0.9 GB used out of 1 GB. I already deleted logs and ran the rhc tidy command. My latest db backup was 14 MB (from two days ago; I have not tested whether it is complete) and my app has 35 MB of files, images, JS libraries, etc.
I do not want to deal with the capacity problem here; rather: how much free space does a Jenkins-triggered push need in order to succeed?
I am a student and I use OpenShift for a student project, so I would like to avoid upgrading at the moment. Thanks.
Well, "No capacity remaining." means I have 3/3 gears in use; it is not about disk usage. More info.
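In other words, the Jenkins cartridge needs a free gear on the account to provision a temporary builder, and the free plan caps the account at 3 small gears. Listing the applications and their gears shows whether the account limit is exhausted (standard rhc commands, not from the original answer):
rhc apps
rhc show-app app --gears
Deleting an unused application (rhc app delete <name>) frees its gears so the builder can be provisioned.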

ConnectionFailure using mongo in rails 3.1

I have an app setup with Rails 3.1, Mongo 1.4.0, Mongoid 2.2.4.
What I am experiencing is this:
Mongo::ConnectionFailure: Failed to connect to a master node at localhost:27017
I've had this problem before, but it went away after a computer restart... this time it does not.
I don't understand it; I didn't do anything. I just put my computer into sleep mode, went home, woke it up, and there it was.
Here is the output of sudo mongod:
Fri Nov 25 21:47:14 [initandlisten] MongoDB starting : pid=1963 port=27017 dbpath=/data/db/ 64-bit host=xxx.local
Fri Nov 25 21:47:14 [initandlisten] db version v2.0.0, pdfile version 4.5
Fri Nov 25 21:47:14 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
Fri Nov 25 21:47:14 [initandlisten] build info: Darwin erh2.10gen.cc 9.6.0 Darwin Kernel Version 9.6.0: Mon Nov 24 17:37:00 PST 2008; root:xnu-1228.9.59~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Fri Nov 25 21:47:14 [initandlisten] options: {}
Fri Nov 25 21:47:14 [initandlisten] journal dir=/data/db/journal
Fri Nov 25 21:47:14 [initandlisten] recover : no journal files present, no recovery needed
Fri Nov 25 21:47:15 [websvr] admin web console waiting for connections on port 28017
Fri Nov 25 21:47:15 [initandlisten] waiting for connections on port 27017
And I am able to connect with mongo in the terminal.
After 2 hours of googling, I hope the competence of the SO community is able to figure this out.
Please, if you need more information about my app setup, just ask.
Thanks!
What you see is that the connection times out... that happens either after a long period of inactivity, or if you put your computer to sleep.
You can change/increase the timeout value, but that way you still can't get rid of the connection eventually timing out.
Some MongoDB drivers allow setting :timeout => false, but Mongoid seems to still have problems with that
(see the last 3 links in the list below).
Hope this helps.
See also:
Mongodb server goes down, how to prevent Rails app from timing out?
MongoDB: What is connection pooling and timeout?
https://github.com/mongodb/mongo-ruby-driver
How can I query mongodb using mongoid/rails without timing out?
http://groups.google.com/group/mongoid/browse_thread/thread/b5c94e7047b42f8a
https://github.com/mongoid/mongoid/issues/455
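For the raw mongo gem (1.x), :timeout => false is a query option that keeps the server from reaping the cursor; it must be used with a block so the cursor is closed explicitly (a sketch, with the collection name assumed):
collection.find({}, :timeout => false) do |cursor|
  cursor.each { |doc| puts doc["_id"] }
end
Whether this works through Mongoid is exactly what the issues linked above are about.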
Try changing localhost to 127.0.0.1!
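With Mongoid 2.x that is a one-line change in config/mongoid.yml (a minimal sketch; the environment name and database are assumptions):
development:
  host: 127.0.0.1
  port: 27017
  database: app_development
This sidesteps the case where localhost resolves to the IPv6 address ::1 while mongod is only listening on IPv4.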

Connection error using Rails 3.0 and Mongo 1.4.0

I created a library that records events to MongoDB from my Rails application. I'm using version 1.4.0 of the mongo gem and Rails 3.0 with Ruby 1.8.7. The relevant code is:
def new_event(collection, event)
  @conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)
  @conn.db("event").collection(collection).insert(event)
  @conn.close
end
This has worked fine for recording new events as they happen on the site. However, I also need to backfill the db with old events, so I'm running a script that basically does this:
SomeModel.find_each do |model|
  Tracker.new.new_event("model_event", { ... info from model ... })
end
I'm trying to backfill something on the order of 50k events. As the script runs, I see this:
Tue Sep 27 23:45:20 [initandlisten] waiting for connections on port 27017
Tue Sep 27 23:46:20 [clientcursormon] mem (MB) res:12 virt:78 mapped:0
Tue Sep 27 23:48:49 [initandlisten] connection accepted from 127.0.0.1:51006 #1
Tue Sep 27 23:49:03 [conn1] remove event.application 103ms
Tue Sep 27 23:49:12 [conn1] remove event.listing 127ms
Tue Sep 27 23:49:20 [clientcursormon] mem (MB) res:37 virt:207 mapped:128
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48103 #2
Tue Sep 27 23:51:44 [conn2] end connection 127.0.0.1:48103
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48104 #3
Tue Sep 27 23:51:44 [conn3] end connection 127.0.0.1:48104
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48105 #4
Tue Sep 27 23:51:44 [conn4] end connection 127.0.0.1:48105
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48106 #5
Tue Sep 27 23:51:44 [conn5] end connection 127.0.0.1:48106
The ports (127.0.0.1:XXXXX) and (what I assume are) the connection pool #s keep incrementing, until eventually I get this exception from the Ruby script:
Failed to connect to a master node at localhost:27017
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:526:in `connect'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:688:in `setup'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:104:in `initialize'
Just found the solution. I needed to make the connection object a class variable so it is shared across all instances of the Tracker class:
# one shared, pooled connection for the whole class instead of one per call
@@conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)

def self.new_event(collection, event)
  @@conn.db("event").collection(collection).insert(event)
end
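Put together, the fixed class and the backfill loop look like this (a sketch; everything outside new_event is reconstructed from the question, and the call site changes from Tracker.new.new_event to Tracker.new_event because it is now a class method):
require 'mongo'

class Tracker
  # created once when the class loads; all inserts reuse this 5-socket pool
  @@conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)

  def self.new_event(collection, event)
    @@conn.db("event").collection(collection).insert(event)
  end
end

SomeModel.find_each do |model|
  Tracker.new_event("model_event", { ... info from model ... })
end
The original version opened a brand-new TCP connection for every one of the ~50k events, faster than they were torn down, which is what the ever-incrementing connection numbers in the mongod log show.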
