I built a photo gallery which uses Paperclip and validates the content-type using validates_attachment_content_type.
The application runs on a shared host with Passenger.
Is it possible to bypass the validation and run malicious scripts from the public/pictures directory? If so, is there anything that I can do to avoid evil scripts from running or from being uploaded?
Is it possible to bypass the validation and run malicious scripts from the public/pictures directory?
Yes. You can have a perfectly valid renderable image file that also contains HTML with script injection. Thanks for the bogus content-sniffing, IE, you have ruined everything.
See http://webblaze.cs.berkeley.edu/2009/content-sniffing/ for a summary.
If so, is there anything that I can do to avoid evil scripts from running or from being uploaded?
Not really. In theory you can check the first 256 bytes for HTML tags, but then you have to know the exact details of what browsers content-sniff for, and keeping that comprehensive and up-to-date is a non-starter.
If you are processing the images and re-saving them yourself, that can protect you. Otherwise, do one or both of:
- only serve user-uploaded files from a different hostname, so they don't have access to the cookie/auth details that would allow an injected script to XSS into your site (but look out for non-XSS attacks like general JavaScript/plugin exploits)
- serve user-uploaded files through a server-side script that includes the 'Content-Disposition: attachment' header, so browsers don't attempt to view the page inline (but look out for old versions of Flash ignoring it for Flash files). This approach also means you don't have to store files on your server filesystem under the filename the user submits, which saves you some heavy and difficult-to-get-right filename validation work.
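To illustrate the second option, here is a minimal Rails controller sketch, assuming a Paperclip attachment - the Picture model, attachment name and filename scheme are invented for the example:

def show
  picture = Picture.find(params[:id])
  # Content-Disposition: attachment makes the browser download the
  # file instead of rendering (and content-sniffing) it inline.
  send_file picture.image.path,
            :type        => picture.image_content_type,
            :filename    => "picture-#{picture.id}.jpg",
            :disposition => 'attachment'
end

Since send_file streams from whatever path you hand it, this also lets you store the file under a name you generate yourself rather than the user-supplied one.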
Can someone show a complete example of the application cache with HTML, CSS, and JS, including an appcache file with CACHE, NETWORK and FALLBACK sections? Also, what about updating the manifest - where should that code be written?
http://www.html5rocks.com/en/tutorials/appcache/beginner/#toc-updating-cache
As per updating the cache from the above link, where should that code be written?
The code for updating the manifest is written by your server somewhere.
Whether in PHP, Node.js, or something else, you must write and serve this file with the correct MIME type (text/cache-manifest), as specified in the link you posted.
You can auto-generate this from the css and js files on your server (see the sketch at the end of this answer). Don't include html files unless they are dynamic pages.
The first line in the file must be CACHE MANIFEST
By default it assumes you are putting things into the CACHE section, which is where you need to include all the paths to the css and js files that you want the user to be able to use offline.
To create a NETWORK section, simply print out the word on its own line.
Under this section you should include pages that should only be used online.
Under the FALLBACK section include a page to show if there is no offline version available.
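For example, a complete if minimal manifest might look like this (the asset paths are placeholders for your own files):

CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cache

CACHE:
/stylesheets/main.css
/javascripts/app.js

NETWORK:
*

FALLBACK:
/ /offline.html

Browsers only re-fetch the cached files when the manifest's bytes change, so bumping the version comment is the usual way to push an update.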
This is a brief explanation but you should be able to easily find a tutorial that will help you auto generate this file.
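As a rough starting point, here is a sketch of such a generator (in Ruby here, but any server-side language works the same way; the directory layout is an assumption):

def cache_manifest
  assets = Dir.glob('public/{stylesheets,javascripts}/**/*.{css,js}')
  # Use the newest modification time as a version stamp, so the
  # manifest changes whenever any asset does and clients update.
  version = assets.map { |f| File.mtime(f) }.max
  lines  = ['CACHE MANIFEST', "# version #{version}", '', 'CACHE:']
  lines += assets.map { |f| f.sub(/\Apublic/, '') }
  lines += ['', 'NETWORK:', '*']
  lines.join("\n")
end

Serve the result with the text/cache-manifest MIME type, as mentioned above.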
For more details about the cache manifest itself:
http://diveintohtml5.info/offline.html
Offers the best explanation IMHO.
I want to prevent users from uploading a shell (exploit) to my host. I remember FCKeditor had a few bugs that allowed a hacker to upload files to the server. Is there a similar issue with CKEditor?
How can I trust users' files and make sure they aren't fakes? For example, a hacker can edit the inside of a PDF: the file has a PDF extension and type but contains malicious code.
Is using htmlencode/htmldecode enough against XSS attacks?
CKEditor doesn't include any file upload; you have to add that part yourself.
Again, CKEditor doesn't have that part. They sell CKFinder to fill that role and it has some checks to verify that the uploaded file is safe, but you must be very careful about which users you allow to upload files to your server.
No. If you're using a WYSIWYG editor you are not going to htmlencode the provided data, and other basic tricks also aren't enough. You need a full check like HTMLPurifier.
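In Rails, for example, the built-in sanitize helper does that kind of whitelist-based check (the tag and attribute lists below are only an illustration):

# Only whitelisted tags/attributes survive; scripts, event handlers
# and everything else are stripped by a real HTML parser.
clean = sanitize(params[:body],
                 :tags       => %w(p b i u a ul ol li),
                 :attributes => %w(href))

The principle is the same as HTMLPurifier's: parse the markup properly and whitelist what is allowed, rather than blacklisting dangerous patterns.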
I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3 but the user requests the file via the Rails-application (hiding the actual URL). If the file was on my servers local file-system, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case - where the file is not on the local file-system, but on S3 - it seems that I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client when the file is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem.

First (and I'm not 100% sure you care for this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs to your bucket and use a custom URL instead of the Amazon one. To do that, you need to point your DNS at the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example).

How to call that URL will differ depending on which gem you use, but a word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file.... I couldn't find the setting to fix it, so a simple .sub method on the S3 portion of the plugin fixed it.
On to your other questions, to execute some rails code (like recording the hit in the db) before downloading you can simply do this:
def download
  file = File.find(...)
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but it still gives you the ability to run some Ruby first. (Of course the above code won't work as is; you will need to point it at the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax for the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit more tricky. Hopefully, your situation is a bit more simple than mine and the following solution can work. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment by
file.content_disposition = "attachment"
file.save
The above code can be executed after the file exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be set when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time); if you find one, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) into HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file that has the Content-Disposition: attachment header (in an HTML image tag, say), the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
Hope that helps! Good luck.
I have an ASP.NET MVC website that works in tandem with a Windows Service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows Service to be accessible (to me, only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that?
At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way?
Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (Logs folder in my website), but when I try to access the file via HTTP I get an error:
The process cannot access the file 'foo' because it is being used by another process.
Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening).
How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch?
PS. I'm using IIS7 if that helps.
Unfortunately I'm afraid you'll have to write a handler that opens the file and returns it to the client.
I've written an IIS Manager extension that displays server log files, and what I've noticed is that even a simple
System.IO.File.OpenRead("")
can still run into the same problem and return the same error. It was kind of confusing.
In the end I used
System.IO.File.Open("", FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
and I could easily open the file while the server was writing logs to it :) FileShare.ReadWrite tells Windows you don't mind other processes having the file open for writing - which is exactly what the logging service is doing, and why OpenRead (which only shares for reading) fails.
I think the virtual directory is an "okay" solution, if you add the directory (application) with READ ONLY rights + perhaps "BROWSE directory" too (so you can see the folder contents rendered by IIS).
(But once you do that, you should consider that you also allow anonymous access to that folder - unless you enable authentication - so watch out for "secret" contents of the logfiles that you might expose. Just a thought.)
Another approach, which I prefer myself, is to make an MVC/ASP.NET page that does the lookup in the folder in normal code, so that you can filter 100% of the data that is shown in the HTML.
You can open the files as text streams in read-only mode.
If it's a problem to gain access to the log folder, I would use the virtual directory with READ ONLY access and then program something that renders the logfiles as HTML on my screen, with my own detail levels. Perhaps even add some sort of "login" first. But it all depends on your security levels and the contents of the logfiles.
Is this meaningful to you? If not, please explain more, as I've been through this thought process a few times already for similar situations.
Does anyone know why there are question marks (with a number) after the images and css files (when looking at the html code)? And how can I turn them off?
From Rails API documentation:
By default, Rails will append all asset paths with that asset's timestamp. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL as the timestamp is part of that, which in turn busts the cache).
Hope it helps.
It is to be able to cache the file on the client while still making sure the client receives the newest version when there is a change. Each file modification results in a new timestamp, so the client makes a new request to the server and receives the modified file.
If you do not want this (though I cannot see why - it is a good thing), simply do not use the Rails helpers for including javascripts or stylesheets. Just write the normal HTML link and script tags yourself.
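For example (the asset name is made up), the helper and the raw tag compare like this in ERB:

<%# Rails helper - appends the timestamp query string automatically %>
<%= stylesheet_link_tag 'main' %>
<%# renders roughly: <link href="/stylesheets/main.css?1284139906" media="screen" rel="stylesheet" type="text/css" /> %>

<%# plain HTML - no timestamp, so no automatic cache busting %>
<link href="/stylesheets/main.css" media="screen" rel="stylesheet" type="text/css" />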