Fail to upload file size 0 with chunking option enabled - upload

I can't upload a file with size 0. I have enabled the chunking option. Is there a limit on what the file size has to be? Please see the error below for more info.
Uncaught TypeError: Cannot read property 'shift' of undefined
chunked.nextPart # s3.fineuploader-5.0.3-2.js:4056
chunked.sendNext # s3.fineuploader-5.0.3-2.js:4076
upload.now # s3.fineuploader-5.0.3-2.js:4493
upload.maybeSendDeferredFiles # s3.fineuploader-5.0.3-2.js:4446
upload.maybeDefer # s3.fineuploader-5.0.3-2.js:4420
upload.start # s3.fineuploader-5.0.3-2.js:4501
qq.extend.upload # s3.fineuploader-5.0.3-2.js:4523
qq.basePrivateApi._uploadFile # s3.fineuploader-5.0.3-2.js:3108
qq.basePublicApi.uploadStoredFiles # s3.fineuploader-5.0.3-2.js:1822
Thanks.

You're saying you want to upload an empty file? Why would you want to do that? This is usually prohibited and caught by Fine Uploader's validation logic, although that validation was briefly broken and then fixed in version 5.2.0. I'm guessing you have an older version.
There is an open feature request to allow empty files to pass through, but it is fairly low priority and is not scheduled for any near-future release.

Related

Reupload images on Amazon S3 carrierwave

I have images uploaded to an Amazon S3 bucket. When I try to recreate_versions!, it gives me a nil body exception.
I think this is due to changes in the previous uploader settings in our code. However, when I do pr.image.url, it still gives me the original image, so what I tried is below:
begin
  User.all.each do |pr|
    if pr.user.present?
      pr.remote_avatar_url = pr.avatar.url
      pr.save!
    end
  end
rescue
end
But it throws an error:
ActiveRecord::RecordInvalid: Validation failed: Avatar trying to
download a file which is not served over HTTP
Which I know is a CarrierWave exception. What I'm trying to do is reupload all the images (because pr.avatar.url gives me the original image), but I don't know how to do it. Any help will be greatly appreciated.
You are correct in attempting to store the remote URL in an attribute called remote_avatar_url.
CarrierWave throws the Validation failed: ATTRIBUTE trying to download a file which is not served over HTTP exception when attempting to save an invalid URL to the model. More specifically, CarrierWave::Uploader::Download raises a CarrierWave::DownloadError when the downloaded file "scheme" attribute does not match the regex /^https/ (meaning the URL does not start with "https"). You can view this logic here. (In particular, see lines 31 and 69.)
I'm not sure if this is the problem, but you might try checking pr.avatar.url to see whether it begins with the https prefix before assigning it to remote_avatar_url.
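Something along those lines, as a rough sketch (the loop mirrors the one in the question; the https check follows the regex mentioned above):

User.all.each do |pr|
  url = pr.avatar.url
  next unless url.to_s.start_with?('https')  # skip URLs CarrierWave would reject
  pr.remote_avatar_url = url
  pr.save!
end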
I hope this was at least somewhat helpful.
To re-upload the image, you need to download it first. If your CarrierWave attr is remote_avatar, then maybe you can do something like:
require 'open-uri'  # File.open cannot fetch an HTTP(S) URL; open-uri can

begin
  User.all.each do |pr|
    if pr.user.present?
      pr.remote_avatar = URI.parse(pr.avatar.url).open
      pr.save!
    end
  end
rescue
end

Ruby on Rails log file too big -> remove params from it

I made a distributed real-time system in RoR; it's composed of 2 machines.
PC A:
takes images from a camera and sends them to the second PC. So this machine sends an HTTP request every second with the image in the params.
PC B - the server:
saves the image in a database.
My problem is that the log file becomes too big because it logs even the params string.
How can I set the logger to truncate the params? Or simply remove them?
Sorry for my bad English..... I hope that someone can help me.
Bye
Davide Lentini.
To specifically remove certain params from the logs you can set the config.filter_parameters in application.rb like this:
config.filter_parameters += [:parameter_name]
This will replace the value of the filtered parameter with "[FILTERED]".
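For this question, that might look like the following in config/application.rb (the param name :image is an assumption about how PC A posts the picture):

module YourApp  # the application module name is a placeholder
  class Application < Rails::Application
    # Replaces the huge image payload with "[FILTERED]" in the request logs
    config.filter_parameters += [:image]
  end
end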
You can set the log level to be less verbose.
See the rails guide on debugging.
So for your entire application (in development), add this to config/environments/development.rb:
config.log_level = :warn # in any environment initializer
Or, to change the logging level directly in your application:
Rails.logger.level = 0 # at any time
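If you prefer symbolic names over the raw integer, Ruby's Logger constants map to the same levels:

require 'logger'

# Logger::DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3, FATAL = 4
Rails.logger.level = Logger::WARN  # same effect as setting the level to 2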

"Unrecognized type" error in haml view, but not console

I have a small library, say widget_utils.rb, that lives in the lib/ directory of the app. (I've set the config to autoload source files from lib/)
The utils include the 'spira' gem which does ORM mapping based on RDF.rb. In the widget_utils.rb file are class objects based on my RDF repository, and they refer to a type called Spira::Types::Native.
I have a static method in WidgetUtils that returns a hash based on RDF data for use in rendering, WidgetUtils.options_for_select.
If I launch the console, I can call WidgetUtils.options_for_select and get back my hash perfectly.
But if I run the server, and try to render /widget/1234 or /widget/1234/edit to show one widget, I get the error Unrecognized type: Spira::Types::Native
At the bottom of my stack trace is widget_controller.rb, and at some point the haml file is doing a "load" of "lib/widget_utils.rb" and crashing with the Unrecognized type error at the point where the type is referenced in the util source file.
From the console, if I do load 'lib/widget_utils.rb', I get no error; the type is recognized successfully.
I'm stumped, and too new to rails to successfully come up with a strategy to solve this problem outside of trial and error.
As it turns out, this problem is specific to the Spira library I'm working with, and JRuby when serving pages.
Spira keeps its collected known RDF types in a "settings" hash that it makes thread local. In most ordinary circumstances on MRI Ruby/Rails this isn't an issue, since the requests are usually handled in the same thread as the Spira initialization.
Similar problems will occur under JRuby if you attempt to make data global through, for instance, class variables. Some other mechanism needs to be found to make global reference data available to all service threads. (Mutable shared data is not even something to consider unless you like headaches.)
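A tiny illustration of the underlying issue (the key name is made up): data stashed in Thread.current by the initialization thread is invisible to a different thread that serves a request:

# Boot/initialization thread stores "global" data thread-locally
Thread.current[:known_rdf_types] = [:native]

# A separate (request-serving) thread gets its own, empty thread-local storage
worker = Thread.new { Thread.current[:known_rdf_types] }
worker.join
worker.value  # => nil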
There is a workaround for Spira that will keep its settings available. In my utility class I added this monkey patch to take the settings out of thread-local storage:
module Spira
  def settings
    @settings ||= {}
  end
  module_function :settings
end

Rails 3 - File name too long error

We have an online store running on Rails 3 Spree platform. Recently customers started reporting weird errors during checkout and after analyzing production logs I found the following error:
Errno::ENAMETOOLONG (File name too long - /var/www/store/tmp/cache/UPS-R43362140-US-NJ-FlorhamPark07932-1025786194_1%7C1025786087_1%7C1025786089_15%7C1025786146_4%7C1025786147_3%7C1025786098_3%7C1025786099_4%7C1025786100_2%7C1025786114_1%7C1025786120_1%7C1025786121_1%7C1025786181_1%7C1025786182_1%7C1025786208_120110412-2105-1e14pq5.lock)
I'm not sure why this file name is so long, or whether this error is specific to Rails or Spree. Also, I'm not very familiar with the Rails caching system. I would appreciate any help on how I can resolve this problem.
I'm guessing you are using spree_active_shipping, as that looks like a cache id for a UPS shipping quote. This will happen when someone creates an order that has a lot of line items in it. With enough line items this will of course create a very large filename for the cache, thus giving you the error.
One option would be to use memcache or redis for your Rails.cache instead of using the filesystem cache. Another would be to modify the algorithm that generates the cache_key within app/models/active_shipping.rb in the spree_active_shipping gem.
The latter option would probably be best, and you could simply have the generated cache key run through a hash like MD5 or SHA1. This way you'll get predictable cache key lengths.
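A minimal sketch of that hashing idea (the long key and the lookup block are placeholders, not spree_active_shipping's actual code):

require 'digest'

long_key  = "UPS-R43362140-US-NJ-...-dozens-of-line-items"  # placeholder
short_key = Digest::MD5.hexdigest(long_key)                 # always 32 hex characters
Rails.cache.fetch(short_key) { expensive_rate_lookup }      # expensive_rate_lookup is assumed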
Really, this should be fixed within spree_active_shipping though; it shouldn't be generating unpredictably long cache keys. Even if a key-value store is used, those oversized keys are wasted memory.
It is more related to your file system. Either set up a file system which supports longer file names, or change the software to generate better (MD5? timestamp? unique id?) file names.
Maybe this helps:
config.assets.digest and config.assets.debug can't both be true
It's a bug: https://github.com/rails/jquery-rails/issues/33
I am using Rails 3.2.x and had the same issue. I ended up generating an MD5 digest in the view helper method used to generate the cache key.
FILENAME_MAX_SIZE = 200

def cache_key(prefix, params)
  params = Array.wrap(params) if params.instance_of?(String)
  key = "#{prefix}/" << params.entries.sort { |a, b| a[0].to_s <=> b[0].to_s }.map { |k, v| "#{k}:#{v}" }.join(',').to_s
  if URI.encode_www_form_component(key).size > FILENAME_MAX_SIZE
    key = Digest::MD5.hexdigest(key)
  end
  key
end
Here I have to check the length of the URI-encoded key using URI.encode_www_form_component(key).size because, as you can see, in my case the cache key is generated using : and , separators, and Rails encodes the key before caching the results.
I took reference from the pull request.
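For illustration, a hypothetical call site for that helper (shipping_params and fetch_rates are made-up names):

shipping_params = { 'origin' => '07932', 'dest' => '10025', 'items' => '14' }
key = cache_key('ups-quote', shipping_params)
Rails.cache.fetch(key) { fetch_rates(shipping_params) }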
Are you using the paperclip gem? If yes, this issue has been solved: https://github.com/thoughtbot/paperclip/issues/1246.
Please update your paperclip gem to the latest version.

How to fix / debug 'expected x.rb to define X.rb' in Rails

I have seen this problem arise in many different circumstances and would like to get the best practices for fixing / debugging it on StackOverflow.
To use a real world example this occurred to me this morning:
expected announcement.rb to define Announcement
The class worked fine in development, testing, and from a production console, but failed in a production Mongrel. Here's the class:
class Announcement < ActiveRecord::Base
  has_attachment :content_type => 'audio/mp3', :storage => :s3
end
The issue I would like addressed in the answers is not so much solving this specific problem, but how to properly debug to get Rails to give you a meaningful error, as 'expected x.rb to define X.rb' is often a red herring...
Edit (3 great responses so far, each w/ a partial solution)
Debugging:
From Joe Van Dyk: Try accessing the model via a console on the environment / instance that is causing the error (in the case above: script/console production, then type in 'Announcement').
From Otto: Try setting a minimal plugin set via an initializer, e.g. config.plugins = [ :exception_notification, :ssl_requirement, :all ], then re-enable them one at a time.
Specific causes:
From Ian Terrell: if you're using attachment_fu make sure you have the correct image processor installed. attachment_fu will require it even if you aren't attaching an image.
From Otto: make sure you didn't name a model that conflicts with a built-in Rails class, eg: Request.
From Josh Lewis: make sure you don't have duplicated class or module names somewhere in your application (or Gem list).
That is a tricky one.
What generally works for me is to run "script/console production" on the production server, and type in:
Announcement
That will usually give you a better error message. But you said you already tried that?
I just ran into this error as well.
The short of it was that my .rb file in my lib folder was not in a folder structure matching my module naming convention. This caused the ActiveSupport autoloader to consult the wrong module when checking whether my class constant was defined.
Specifically, I had defined the following class
module Foo
  class Bar
  end
end
in /lib/bar.rb, at the root of lib.
This caused the autoloader to ask module Object if Bar was defined instead of module Foo.
Moving my rb file to /lib/foo/bar.rb fixed this problem.
I've encountered this before, and the AttachmentFu plugin was to blame. I believe in my case it was due to AttachmentFu expecting a different image processor than what was available, or non-supported versions were also installed. The problem was solved when I explicitly added :with => :rmagick (or similar -- I was using RMagick) to the has_attachment method call even for non-image attachments. Obviously, make sure that your production environment has all the right gems (or freeze them into your application) and supporting software (ImageMagick) installed. YMMV.
As for getting Rails and AttachmentFu not to swallow and hide the real error -- we fixed the problem before figuring that part out completely.
Since this is still the top Google result, I thought I'd share what fixed the problem for me:
I had a module in the lib folder with the exact same name as my application. So, I had a conflict in module names, but I also had a conflict of folder names (not sure if the latter actually makes a difference though).
So, for the OP, make sure you don't have duplicated class or module names somewhere in your application (or Gem list).
For me, the cause was a circular dependency in my class definitions, and the problem only showed up using autotest in Rails. In my case, I didn't need the circular dependency, so I simply removed it.
You can try disabling all your plugins and add them back in one by one.
In environment.rb, in the initializer section, add a line like this one:
config.plugins = [ :exception_notification, :ssl_requirement, :all ]
Start with the minimum set to run your application and add them in one by one. I usually get this error when I've defined a model that happens to map to an existing filename. For example, a Request model but Rails already has a request.rb that gets loaded first.
I had this problem for a while, and in my case the error was always preceded by this S3 error:
(AWS::S3::OperationAborted) "A conflicting conditional operation is currently in progress against this resource. Please try again."
This problem usually occurs when creating the same bucket over and over again. (Source: AWS Developers forum)
This was due to the fact that I had used attachment_fu to create the bucket and had uncommented the line containing the command Bucket.create(@@bucket_name) in lib/technoweenie/attachment_fu/backends/s3_backends.rb (near line 152).
Once I commented out or deleted the Bucket.create(@@bucket_name) call, the problem disappeared.
I hope this helps.
Changing class names while using STI caused this for me:
Class changed from 'EDBeneficiary' to 'EdBeneficiary'
Existing records had 'EDBeneficiary' stored in the 'type' column, so when Rails tried to load them up the exception was raised.
Fix: Run a migration to update values in the 'type' column to match the new class name.
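A hedged sketch of such a migration (the table name ed_beneficiaries is only a guess):

class FixEdBeneficiaryStiType < ActiveRecord::Migration
  def up
    execute "UPDATE ed_beneficiaries SET type = 'EdBeneficiary' WHERE type = 'EDBeneficiary'"
  end

  def down
    execute "UPDATE ed_beneficiaries SET type = 'EDBeneficiary' WHERE type = 'EdBeneficiary'"
  end
end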
In my case, I am getting this error in the development console, but I can load the class in irb.
Sorry this isn't a definitive answer, but another approach that might work in some specific circumstances:
I just ran into this problem while debugging a site using Ruby 1.8.7 and Merb 1.0.15. It seemed that the class in question (let's call it SomeClass) was falling out of scope, but when the some_class.rb file was automatically loaded, the other files it required (some_class/base.rb etc.) were not loaded by the require mechanism. Possibly a bug in require?
If I required the some_class file earlier, such as at the end of environment.rb, it seemed to prevent the object from falling out of scope.
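In other words, something like this at the very end of config/environment.rb (some_class is the stand-in name from above):

require 'some_class'  # loads some_class.rb early, which in turn requires some_class/base.rb etc.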
I was getting this error due to a controller definition being in a file that wasn't named as a controller. For instance, you have a Comment model and you define the controller in a comment.rb file instead of comments_controller.rb.
I had this problem with Rails version 1.2.3. I could reproduce the problem only with Mongrel; accessing the console environment didn't give any useful info. In my case, I solved it by making the RAILS_ROOT/html folder writable by Mongrel and then restarting the web server, as some users reported here:
http://www.ruby-forum.com/topic/77708
When I upgraded Rails from 1.1.6 to 1.2.6 and 2.0.5 for my app, I faced this error. In short, old plugins caused it. These plugins were already outdated and no longer updated (some no longer even had a repo!). After I removed them, the app worked on 1.2.6 and 2.0.5. But I didn't check the plugins' source code in detail.
