Failed to understand Heroku slug size - ruby-on-rails

I recently pushed a new, empty app (with a Gemfile added) to Heroku and it was added successfully. The folder size locally is 488 KB, but on Heroku its slug size is 6 MB of 100 MB. I did this after trying to push my real application, which kept failing with this error: fatal: sha1 file '' write error invalid argument. The size of that app locally is 3 MB. Could this really be the reason it isn't being pushed? How on earth do I reduce this size, even after adding .gitignore and .slugignore files? Thanks

Details on the slug size can be found here:
https://devcenter.heroku.com/articles/slug-compiler
The key part you might be interested in is:
You can roughly estimate slug size locally by doing a fresh checkout
of your app, deleting the .git directory, and running du -hsc.
$ du -hsc | grep total
You get a 200 MB maximum though, so I wouldn't worry about it.
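If you want to run that estimate yourself, here is a minimal sketch of the whole sequence (myapp is a placeholder for your repo):
$ git clone myapp myapp-fresh     # fresh checkout into a new directory
$ cd myapp-fresh
$ rm -rf .git                     # Git history is not part of the slug
$ du -hsc . | grep total          # rough estimate of the slug size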

Related

Build errors when compiling in ac-docker on Win10: asked to rebuild the precompiled header since a file has been modified since the precompiled header was built

Here is a copy of a few of the errors, but there are too many to list in the post (maybe 50 in total, all the same error, just with different file names):
fatal error: file '/azerothcore/src/server/game/Entities/GameObject/GameObject.h' has been modified since the precompiled header '/azerothcore/var/build/obj/src/server/game/CMakeFiles/game.dir/cmake_pch.hxx.pch' was built: size changed
note: please rebuild precompiled header '/azerothcore/var/build/obj/src/server/game/CMakeFiles/game.dir/cmake_pch.hxx.pch'
1 error generated.
make[2]: *** [src/server/game/CMakeFiles/game.dir/build.make:154: src/server/game/CMakeFiles/game.dir/AI/CoreAI/GuardAI.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
How can I do this step and rebuild my precompiled headers in a Docker build of AC? Also, my world database keeps bloating up by huge amounts: it went from 8 GB to 16 GB to 28 GB since the most recent build yesterday (files from 10/11/2021).
This is probably because of ccache, the cache that gives you super-fast compilation; sometimes the header cache fails because of a large number of changes, and it must be cleaned.
So, please try this:
./acore.sh docker dev:dash compiler ccacheClean
It should clean the ccache; then you can restart the compilation.
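A sketch of the full cycle, assuming the standard acore.sh Docker workflow (check the docs for your AzerothCore version for the exact subcommands):
$ ./acore.sh docker dev:dash compiler ccacheClean   # wipe the stale ccache
$ ./acore.sh docker build                           # then rebuild from scratch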
PS: The database size problem is related to the MySQL configuration. MySQL usually keeps a large number of binary logs (binlogs) that should be cleaned, but please open a separate question for that. AC provides the default MySQL configuration, and the system admin should tune it based on their needs.
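For reference, one common way to reclaim binlog space (a generic MySQL sketch, not AC-specific; the 7-day window is an arbitrary choice) is:
$ mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"
# or cap retention in my.cnf (MySQL 5.7 and earlier):
#   expire_logs_days = 7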
The only way I was able to fix my problems was at a personal cost: the hours spent creating so much custom content, only for it to be lost. I rebuilt a clean server and then imported only my AUTH and CHARACTER databases; problems solved. The build is 100% working as intended, but I will have to slowly and painfully redo all my custom work that was done in the WORLD database.

Grails 3 - Gradle: Binary file gets corrupted during build on Heroku

I am trying to use the Google Rest API from a Heroku instance. I am having problems with my certificate file, but everything works as expected locally.
The certificate is a PKCS 12 certificate, and the exception I get is:
java.io.IOException: DerInputStream.getLength(): lengthTag=111, too big.
I finally found the source of this problem. Somewhere along the way the certificate file is modified: locally it is 1732 bytes, but on the Heroku instance it is 3024 bytes. I have no idea when this occurs, though. I build with the same command locally (./gradlew stage) and execute the resulting jar with the same command.
The file is stored in grails-app/conf; I don't know a better place to put it. I am reading it using this.getClass().getClassLoader().getResourceAsStream(...)
I found that similar problems can occur when using Maven with resource filtering, but I haven't found any sign of Grails or Gradle doing the same kind of resource filtering.
Does anyone have any clues about what this can be?
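One way to narrow down where the bytes change is to checksum the certificate at each stage; a sketch, assuming the certificate is bundled into the stage-built jar (cert.p12 and the jar path are hypothetical names):
$ shasum grails-app/conf/cert.p12                                # source file
$ unzip -p build/libs/myapp.jar cert.p12 | shasum                # copy inside the locally built jar
$ heroku run 'unzip -p build/libs/myapp.jar cert.p12 | shasum'   # copy on the dyno
If the jar copy already differs locally, the build step is mangling the file (for example, resource filtering re-encoding a binary); if it only differs on Heroku, look at what the buildpack does differently.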

Git clone error in Xcode

Up until now I've been using Xcode with Subversion for my code repositories with no problem. Now I'm working on a project that uses a Git repository stored at GitHub, so I figured I'd go clone that repository to my local machine and get started.
In Xcode, I add the repository then tell it to Clone -- The machine chews on this for a while, and if I use the Finder I can see the files being placed in the target directory (which is a newly-created, empty directory on my system). After a while though, I get an error message:
fatal: destination path '/Users/myname/Documents/ProjectName' already exists and is not an empty directory.
I have tried this three times now, each time starting with an empty target directory, and it gives the same error message each time, so I know it has to be something I am doing wrong, or have not set up properly.
Thinking that perhaps something was going wrong and the system was trying to do a second clone operation (to a now non-empty directory), I tried canceling and trying a build, but some files are missing from the project -- so not all of it made it down to my system.
My searches on this issue turn up several hits for people doing the clone via command line and showing this error message, but not through the Xcode interface.
Does anyone have any suggestions about what might be going wrong?
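One workaround while you investigate is to do the clone from the command line into a fresh path and open the project in Xcode afterwards; a sketch (the URL and path are placeholders):
$ git clone git@github.com:username/ProjectName.git ~/Documents/ProjectName
# then open the project in Xcode and re-add the repository there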

Error deploying to Heroku

Please help. I have absolutely no idea what's wrong. The Rails app works on my local machine.
If I do this:
git push heroku master
I get this:
Counting objects: 4195, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3944/3944), done.
Writing objects: 72% (3009/4178), 9.99 MiB | 73 KiB/s
Connection to 10.46.xxx.xxx closed by remote host.
error: pack-objects died of signal 13
error: failed to push some refs to 'git@heroku.com:gentle-rain-xxxx.git'
I just dealt with 24 hours of this hell. I re-cloned repos, destroyed apps, repacked, pruned... the whole 9 yards.
It turned out that I had a .txt file, ~250 MB in size, that was still present in my local history (as well as on GitHub) even though I had removed it from my master branch.
I checked out this page and inadvertently found my answer here:
https://help.github.com/articles/remove-sensitive-data
The .txt file had previously been in the doc/ folder, so I pointed this command at where the file would have been in any commits and ran it:
git filter-branch --index-filter 'git rm --cached --ignore-unmatch doc/US.txt'
This is very useful if you realize you have static assets of some sort that don't have to be in your repo and are causing you to get the signal 13 error.
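Note that filter-branch only rewrites your local history. Per the GitHub article linked above, you still need to force-push the rewritten branches and expire the old objects; a sketch:
$ git push origin --force --all
$ rm -rf .git/refs/original/            # drop filter-branch's backup refs
$ git reflog expire --expire=now --all
$ git gc --prune=now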
I was having problems with a repository as small as 130 MB. I don't really want to prune my repository, nor do I feel it is necessary.
I can't help but feel this is a problem with git and/or Heroku; I believe a big push should succeed, even over a "slow" or less-than-ideal connection.
How I solved/worked-around this issue was to spin up an EC2 instance, checkout my repo there, and push to github. In that way, my deploy speed was 4MiB/s (faster than my own 80KiB/s!). Furthermore, in the cases where the push would fail due to some configuration issues, I could quickly tweak and try again.
For more information on this technique, I've written up the full steps on how to spin up an EC2 instance for this purpose here: http://omegadelta.net/2013/06/16/pushing-large-repositories-to-heroku/
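In outline, the relay push is something like this (a sketch; the repo and app names are placeholders, and it assumes git and the Heroku remote are set up on the instance):
# on the EC2 instance
$ git clone git@github.com:username/myapp.git && cd myapp
$ git remote add heroku git@heroku.com:myapp.git
$ git push heroku master    # pushed from the datacenter, not over your slow uplink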
Hi, I had the same problem trying to push to the Cedar stack. I contacted Heroku support and they fixed it. Here is what they said:
It appears to be due to a change in our git server on our end. I'll be
following up with our engineers to make sure we get a permanent fix
rolled out for this.
-Chris
This appears to just be a timeout from your push being too large.
I got around this by doing a git reset to a SHA that was around 500 commits back, pushing that, and then pushing the rest of my repo.
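You can do the same without resetting by pushing a refspec for the older commit first (a sketch; <sha> stands for the commit roughly 500 back and is not from the original post):
$ git push heroku <sha>:refs/heads/master   # push the first chunk of history
$ git push heroku master                    # then push the remainder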

Git push fails to github: failed to read object

The story:
I've been developing an RoR app on both my desktop and my laptop. It was quite handy to commit changes made on one machine, push them to GitHub, and fetch & merge them on the other.
The starting point is this: I committed the latest changes on my desktop, pushed them to GitHub, and then fetched and merged them into my laptop. Then I made some commits on the laptop and pushed to GitHub. I took the changes and merged them into my desktop (with --no-ff). THEN happened the probable source of all this mischief: I reverted the desktop to the commit it was at before the latest fetch & merge, did some development work, committed, and pushed to GitHub. On the laptop I did the revert as well, though I reverted to a commit made somewhere between the latest fetches from GitHub, then fetched again and merged those. Some error messages came up after reverting both the desktop and the laptop, but things still worked fairly well and I kept working on both machines.
Until now. Trying to push from my laptop to GitHub gives the following output:
Counting objects: 106, done.
error: unable to find 5a2a4ac...
error: unable to find bc36923...
error: unable to find ecb0d86...
error: unable to find f76d194...
error: unable to find f899df7...
Compressing objects: 100% (64/64), done.
fatal: failed to read object 5a2a4ac... : Invalid argument
error: failed to push some refs to 'git@github.com:username/repo.git'
So, the question is, what exactly took place here?
EDIT: It seems that suspending my laptop and moving it from place to place in that state somehow screwed up the hard drive. The fsck output is unavailable because we worked around the problem and kept on working, but IIRC some branches and commits were dangling, including the commit git failed to read. - Teemu
I have run into these kinds of issues.
Rather than spending hours trying to resolve and fix these issues, my 'solution' is usually to take the code I want, copy it into a new directory, delete the .git directory, create a new GitHub repository for it, and then connect the two as usual.
Although this may not be a specific answer to the details you raise, I find that there can be a number of ways that git/github issues can happen and rather than wishing I was a 'git expert' now (it's happening but it takes time), I do the above and continue with my actual application development.
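Concretely, that fresh-start approach is something like the following sketch (the directory and repository names are placeholders):
$ cp -R myapp myapp-clean && cd myapp-clean
$ rm -rf .git                      # drop the damaged history
$ git init
$ git add -A && git commit -m 'Fresh start'
$ git remote add origin git@github.com:username/myapp-clean.git
$ git push -u origin master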
The problem you have is that you are trying to read objects that are not part of your 'tree': they exist, but they have been orphaned. However, git allows you to merge one project into another, so here is one way you can keep your commits without starting again; something like the following:
git remote add -f somename git://somegitplace.com/user/some.git   # add and fetch the other repo as a remote
git merge -s ours --no-commit somename/master                     # stage a merge without changing your files
git read-tree --prefix=ext/somename -u somename/master            # read its tree into ext/somename
git commit -m 'external merge'                                    # record the subtree merge
git pull -s subtree somename master                               # pull later updates with the subtree strategy
Hope that helps. Let me know if not and we can attack it again
