So, whenever I upload a file in development mode in Rails with CarrierWave, I get these temporary RackMultipart* files right in the Rails root, even though in config/carrierwave.rb I have the following setting:
CarrierWave.configure do |config|
config.cache_dir = 'tmp/uploads'
end
And no, I didn't change the cache dir in the uploader. Worst of all, every new upload seems to create two RackMultipart* files that are identical in content but not in name. Any idea how to fix this?
This is a problem with the sticky bit: Ruby's Dir.tmpdir refuses to use a world-writable temp directory that lacks the sticky bit, and falls back to the current working directory (your Rails root).
You must run:
chmod o+t /tmp
Then, in the Rails console, check the resolved path with:
Dir::tmpdir
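A quick sanity check (a sketch you can run in `rails console` or plain `ruby`; the mode value shown in the comment is what a healthy Linux /tmp typically reports):

```ruby
require 'tmpdir'

# Dir.tmpdir only accepts a world-writable directory as the temp dir
# when it also has the sticky bit set; otherwise it moves on to other
# candidates and, ultimately, the current working directory.
puts Dir.tmpdir

stat = File.stat('/tmp')
puts stat.sticky?            # expected to be true after `chmod o+t /tmp`
puts format('%o', stat.mode) # a healthy /tmp shows mode 41777 (drwxrwxrwt)
```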
I'm using Mina to deploy my Rails 4 app. Whenever I run mina deploy, it clears out all the images that have been uploaded to my app. How do I stop this from happening? Thanks.
You need to ensure your uploaded assets end up in the shared directory, just as is done for your database.yml.
For example, our assets are all stored in public/system, so we have a line that looks like this:
set :shared_paths, %w[
  files
  log
  private
  public/system
  tmp
]
Then, when you run invoke :'deploy:link_shared_paths', those directories will be symlinked into the root of your current directory, provided they exist in the shared directory (you can create and populate them if they do not).
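For context, a sketch of how this fits into a typical config/deploy.rb for Mina ~0.3 (the task names come from Mina's bundled recipes; adjust the shared paths to your app):

```ruby
# config/deploy.rb (sketch)
require 'mina/rails'
require 'mina/git'

set :shared_paths, %w[config/database.yml log public/system tmp]

task deploy: :environment do
  deploy do
    invoke :'git:clone'
    # Symlinks each entry of :shared_paths from shared/ into the new release,
    # so uploads in public/system survive every deploy
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
  end
end
```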
I am trying to deploy Rails on JRuby as a .war file, using Warbler (Tomcat) and/or TorqueBox 4 (WildFly). The problem I face is that I don't know how to handle uploads with CarrierWave or Paperclip in this setup.
Ideally uploads should be stored outside the war, as it may be replaced with a newer version of the app anytime.
I tried creating a symlink named uploads in the public directory, pointing to the /home/username/uploads directory (permissions set to 777), before packaging the app as a war file, but that doesn't work (I get a 500 error).
Also, how can I access production.log after I've deployed the war file? Or where should I place the logs?
UPDATE
I figured out how to configure CarrierWave to store uploads outside the war file:
if Rails.env.development?
  CarrierWave.configure do |config|
    config.root = "/Users/Username/username_uploads/uploads"
  end
elsif Rails.env.production?
  CarrierWave.configure do |config|
    config.root = "/home/username/username_uploads/uploads"
  end
end
Now CarrierWave uploads the files without a problem, but I get a 404 error when I try to view them.
I tried to include a symlink to the uploads folder inside the war file, but with no success. I tried creating it before running warble war, and also after the app was deployed to Tomcat (inside the app_name folder).
Any idea how to solve this?
UPDATE 2
I found a working solution here:
Configure Symlinks for single directory in Tomcat
In short:
cd into the exploded war directory that Tomcat created (you can find it under tomcat/webapps; if the uploaded war file is named yourapp.war, the directory will be named yourapp in Tomcat 8).
Create an uploads folder with sudo mkdir uploads
Create a bind mount: sudo mount --bind /path/to/actual/upload/directory/uploads uploads
I haven't tested this with WildFly yet, but I will later today or tomorrow. If I remember correctly, it won't automatically explode war files by default.
I would still like to hear other, simpler solutions to the problem, though, as well as opinions on the one I found.
Just food for thought on a different approach...
Using a "cloud based" storage service would make the upload-and-serving problem go away. It would also make it simpler to scale the app should you ever need a second node, and it would reduce the need to scale in the first place, because you would effectively delegate the large data operations, which Ruby traditionally handles badly, to a different service.
Amazon S3 is an obvious choice, but check out Riak too.
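For example, a minimal sketch of pointing CarrierWave at S3 via the fog gem (the bucket name, uploader class name, and environment variable names here are placeholders, not from your app):

```ruby
# config/initializers/carrierwave.rb (sketch; assumes the fog gem is installed)
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                'us-east-1'
  }
  config.fog_directory = 'your-bucket-name'   # placeholder bucket
end

# app/uploaders/image_uploader.rb -- switch the uploader to fog storage
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog
end
```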
I think the problem you are experiencing with this solution:
if Rails.env.development?
  CarrierWave.configure do |config|
    config.root = "/Users/Username/username_uploads/uploads"
  end
elsif Rails.env.production?
  CarrierWave.configure do |config|
    config.root = "/home/username/username_uploads/uploads"
  end
end
is that you are storing the images outside of the Rails public folder, so Rails can no longer serve them. CarrierWave uploads everything properly, as you would expect it to, and creates links relative to the storage dir.
If you used the default #{Rails.root}/public storage, Rails would serve your images, since any content in public is served (static HTML and other assets too). As Rails no longer serves them, it is up to you to do so.
Maybe you can serve them directly through Tomcat (I have no expertise in Tomcat configuration, so you will have to figure this out yourself). That might even be faster, as the requests bypass the Rails stack.
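One Rails-side alternative is to stream the files yourself with send_file. A sketch, assuming a storage root like the one from the question; the route, controller, and constant names are made up for illustration:

```ruby
# config/routes.rb (hypothetical route)
#   get '/uploads/*path', to: 'uploads#show', format: false

# app/controllers/uploads_controller.rb (hypothetical controller)
class UploadsController < ApplicationController
  UPLOAD_ROOT = "/home/username/username_uploads/uploads".freeze

  def show
    # Resolve the requested path and refuse anything that escapes the
    # upload root via "..", then let Rails stream the file.
    path = File.expand_path(params[:path], UPLOAD_ROOT)
    unless path.start_with?(UPLOAD_ROOT + '/') && File.file?(path)
      raise ActionController::RoutingError, 'Not Found'
    end
    send_file path, disposition: 'inline'
  end
end
```

Serving through the app server keeps every request inside the Rails stack, so it is slower than letting Tomcat serve the directory, but it works without container configuration.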
By default, Tomcat does not follow symbolic links (for security reasons). To enable this, you have to add the following inside the Host tag in your server.xml:
<!-- Tomcat 7: -->
<Context allowLinking="true" />
<!-- Tomcat 8: -->
<Context>
<Resources allowLinking="true" />
</Context>
See the docs and the latest migration guide.
If you could provide your logs it would be much easier to diagnose the problem (I take it this only happens in production, since you are asking how to access that log?). Go to the Rails app directory and look in log/production.log. The log level in production is often lower, so you may have to raise it to get more informative output: config/environments/production.rb should have a config.log_level setting, which is probably set to :info; set it to :debug for more verbose logging.
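The log-level change amounts to one line in the environment file (a sketch, assuming a standard Rails 4 app layout):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # :debug is the most verbose level; the production default is :info
  config.log_level = :debug
end
```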
Whenever I run bundle install in my Rails app, empty folders are created in the app's root directory, looking like this:
bundler20140929-26466-x86kd5
bundler20140929-30110-rbes8h
bundler20141010-54137-1g4prq1
UPDATE:
All of my other apps have the same problem, so it must be environment-related. Also, I see RackMultipart upload files from CarrierWave. Thinking this was related to a TMP variable not being set, I tried adding this to development.rb:
ENV['TMPDIR'] = Rails.root.join('tmp').to_s
ENV['TMP'] = Rails.root.join('tmp').to_s
ENV['TEMP'] = Rails.root.join('tmp').to_s
But the problem persists...
Any ideas about why might this be happening?
I'm having some problems with assets on Heroku (Rails) and am hoping someone can point me in the right direction. I've got the asset_sync gem installed, and after many hours of debugging I've finally got it working. However, when I first run git push heroku master (with an empty S3 bucket), I get about four copies of every file uploaded to S3, each with a different hash appended.
Also, a lot of files I previously deleted (which are no longer in my app/assets/images directory) are somehow still getting uploaded. I've deleted the public/assets folder in my local copy and pushed to git, but perhaps that folder is still there on Heroku? How do I debug this? I want my assets to be properly synced, so that if I delete an image while developing locally, it is also removed from S3 on the next deploy.
Another possibly related problem: my static error pages (public/404.html) are not getting served on Heroku, yet work fine in development. Are these static HTML files treated as assets and meant to be uploaded to S3 too?
Running heroku run rake assets:precompile does nothing. My asset_sync.rb initializer is:
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider = 'AWS'
    config.aws_access_key_id = 'key'
    config.aws_secret_access_key = 'key'
    config.fog_directory = 'bucketname'
    config.fog_region = 'us-east-1'
    config.existing_remote_files = "delete"
  end
end
I know I should be using environment variables, but hardcoding my access details shouldn't make any difference, at least while I'm testing.
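For reference, the same initializer with the keys pulled from environment variables might look like this (the variable names are illustrative; on Heroku you would set them with heroku config:set):

```ruby
# config/initializers/asset_sync.rb (sketch)
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider          = 'AWS'
    config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    config.fog_directory         = ENV['FOG_DIRECTORY']
    config.fog_region            = ENV['FOG_REGION']
    config.existing_remote_files = 'delete'
  end
end
```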
Thanks for any help.
I'm generating a sitemap for my website and temporarily saving it to the tmp folder before uploading it to my Amazon AWS account. I'm using the sitemap_generator and fog gems to help me. So far I have this...
# In sitemap.rb
# Set the host name for URL creation
SitemapGenerator::Sitemap.default_host = "http://mycoolapp.com/"
# pick a place safe to write the files
#SitemapGenerator::Sitemap.public_path = 'tmp/'
# store on S3 using Fog
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new
# inform the map cross-linking where to find the other maps
SitemapGenerator::Sitemap.sitemaps_host = "http://#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com/"
# pick a namespace within your bucket to organize your maps
SitemapGenerator::Sitemap.sitemaps_path = '/'
SitemapGenerator::Sitemap.create do
# Put links creation logic here.
#
add '/home'
add '/about'
add '/contact'
end
Whenever I run heroku run rake sitemap:create I receive the following error...
In '/app/tmp/':
511
rake aborted!
Read-only file system - /sitemap.xml.gz
I'm really at a loss as to why it's not working. I even went as far as making sure the tmp folder is created by running Rails.root.join('tmp') as an initializer. Any help in solving this would be greatly appreciated.
Don't Write to Root of Filesystem
Rake is quite clear about the source of the error:
rake aborted!
Read-only file system - /sitemap.xml.gz
This is telling you that your rake task is attempting to write the file to the root of the filesystem.
If you aren't using a stack with ephemeral filesystem support like Celadon Cedar, you need to ensure you're writing to #{RAILS_ROOT}/tmp rather than the root of the filesystem. I haven't tested this myself, but you may be able to fix this issue simply by pointing sitemaps_path to a writable directory. For example:
SitemapGenerator::Sitemap.sitemaps_path = '/app/tmp'
If that doesn't work, you'll have to track down where your gem or rake task is defining the root directory as the location for writing the sitemap.xml.gz file.
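As another avenue (an untested sketch): sitemap_generator writes the files locally before the S3 adapter uploads them, so re-enabling the commented-out public_path line from the question and pointing it at tmp/ may keep that local write inside a writable directory:

```ruby
# config/sitemap.rb (sketch)
SitemapGenerator::Sitemap.public_path = 'tmp/'   # local scratch dir, writable on Heroku
SitemapGenerator::Sitemap.adapter     = SitemapGenerator::S3Adapter.new
```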
So, I encountered the same problem trying to follow the fog instructions for the sitemap_generator gem on Heroku. I eventually switched to CarrierWave and found it to be plug and play.
Check out the documentation here, and let me know if you have questions; I did this just a couple of months ago and have likely encountered any hurdles you may run into.