Download entire folder from server using AFNetworking - ios

I have an application that downloads a dozen HTML files one by one to the cache directory. But I now need to download all of the contents of a folder on my server to the cache directory; they will all be images.
I don't know whether this requires a different approach. I could probably accomplish it by specifying each filename, but I don't want to have to do that.
I am using AFNetworking; any direction would be great!

Related

How can I set "FILES_ROOT" to a folder on the server?

I want to use RoxyFileMan to manage uploaded images and files, but I need to save them on the server. As you know, RoxyFileMan Upload puts files and images into a folder named Uploads inside the fileman directory. We can change FILES_ROOT to another local path to change the directory files get uploaded to.
But I want to upload files to the server and then read them back from the server after they've been uploaded so that they can be edited in CKEditor.
Can anyone please provide advice/guidance on how to achieve this outcome?
It's not entirely clear what you mean, but if I understand you correctly, you are using RoxyFileMan on a local server and want to upload files to a remote online server, right?
Roxy uploads files to a directory on the server it is run from. That means if it is run from localhost, it will use a directory on your localhost; if it is run from a server, it will use a directory on that server.
As far as I know, you cannot upload from localhost directly to an online server.
You could perhaps achieve that using a custom script that opens an FTP connection, but then you would also have to rework all the code to load images from there... which would be rather silly.

Automatically copy new folder contents to another folder after upload

Is there any way a new image uploaded from a PHP form to directory a can be copied to directory b after it has been uploaded? In this case it's not possible to alter the upload path itself or copy the file during upload, so I'm looking for some kind of automatic replication of new directory contents into another directory after the file has been uploaded.
Is there an automated service/script that can move the contents of one directory on our server to another? We upload files to www.mysite.com/upload/thumb, for example, but need them moved automatically to www.mysite.com/cs/upload/thumb. Is this possible without running a move_uploaded_file PHP script? (I would prefer it to be done by the server because we use the same page for many different landing page functions.)
Are you looking for copy (http://php.net/manual/en/function.copy.php)?

Where is the best place to save images from user uploads

I have a website that shows galleries. Users can upload their own content from the web (by entering a URL) or by uploading a picture from their computer.
I am storing the URL in the database, which works fine for the first use case, but I need to figure out where to store the actual images if a user uploads one from their computer.
Is there any recommendation here or best practice on where I should store these?
Should I save them in the appdata or content folders? Should they not be stored with the website at all because it's user content?
You should NOT store user uploads anywhere they can be directly accessed by a known URL within your site structure. This is a security risk because users could upload .htm and .js files. Even a file with the correct extension can contain malicious code that can be executed in the context of your site by an authenticated user, allowing server-side or client-side attacks.
See for example http://www.acunetix.com/websitesecurity/upload-forms-threat.htm and What security issues appear when users can upload their own files? which mention some of the issues you need to be aware of before you allow users to upload files and then present them for download within your site.
Don't put the files within your normal web site directory structure
Don't use the original file name the user gave you. You can add a content disposition header with the original file name so they can download it again as the same file name but the path and file name on the server shouldn't be something the user can influence.
Don't trust image files - resize them and offer only the resized version for subsequent download
Don't trust mime types or file extensions, open the file and manipulate it to make sure it's what it claims to be.
Limit the upload size and time.
Depending on the resources you have to implement something like this, it is extremely beneficial to store all this stuff in Amazon S3.
Once you get the upload you simply push it over to Amazon and pop the URL in your database as you're doing with the other images. As mentioned above it would probably be wise to open up the image and resize it before sending it over. This both checks it is actually an image and makes sure you don't accidentally present a full camera resolution image to an end user.
Doing this now will make it much, much easier if you ever have to migrate/failover your site and don't want to sync gigabytes of image assets.
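As a rough sketch of that flow in Ruby (the bucket name, region, file paths, and the choice of the aws-sdk-s3 and mini_magick gems are all assumptions here, not something prescribed by the answer):
require "securerandom"
require "aws-sdk-s3"      # assumes the aws-sdk-s3 gem
require "mini_magick"     # hypothetical choice for the resize step

upload_path  = "tmp/raw_upload.jpg"      # wherever the raw upload was written
resized_path = "tmp/resized_upload.jpg"

# Resize first -- this also fails fast if the upload isn't really an image
image = MiniMagick::Image.open(upload_path)
image.resize "1024x1024>"
image.write resized_path

# Push the resized file to S3 and keep only the resulting URL in the database
s3     = Aws::S3::Resource.new(region: "us-east-1")
object = s3.bucket("my-user-uploads").object("images/#{SecureRandom.uuid}.jpg")
object.upload_file(resized_path)
puts object.public_url    # store this URL alongside the user's other image URLs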
One way is to store the image in a database table with a varbinary field.
Another way would be to store the image in the App_Data folder, and create a subfolder for each user (~/App_Data/[userid]/myImage.png).
For both approaches you'd need to create a separate action method that makes it possible to access the images.
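The answer above is ASP.NET-flavored (App_Data, action methods), but the idea of a dedicated action that gates access to files stored outside the web root is the same in any framework. A minimal Rails-style sketch, purely as an illustration (the controller, model association, and storage path are all made up):
class UserImagesController < ApplicationController
  # GET /user_images/:id
  def show
    image = current_user.images.find(params[:id])   # assumes a per-user association

    # Files live outside the public web root, so they can only be reached through this action
    path = Rails.root.join("storage", "user_uploads", current_user.id.to_s, image.filename)

    send_file path,
              type: image.content_type,
              disposition: "inline",
              filename: image.filename   # original name only via the header, never via the path
  end
end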
When uploading images you need to verify the content of the file before accepting it; the file-extension method cannot be trusted.
Using the magic-number method to verify the file content is an easy way to do this.
See the Stack Overflow post and the list of magic numbers.
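For example, the first few bytes of a file identify common image formats regardless of the extension. A minimal sketch of the check in Ruby (the signature table covers only a few formats, and the helper name is made up):
# Leading bytes ("magic numbers") for a few common image formats
MAGIC_NUMBERS = {
  "\x89PNG\r\n\x1A\n".b => "image/png",
  "\xFF\xD8\xFF".b      => "image/jpeg",
  "GIF87a".b            => "image/gif",
  "GIF89a".b            => "image/gif"
}.freeze

# Returns the detected type, or nil if the content matches no known signature
def detect_image_type(path)
  header = File.binread(path, 8)
  MAGIC_NUMBERS.each do |signature, type|
    return type if header&.start_with?(signature)
  end
  nil
end

detect_image_type("upload.jpg") # => "image/jpeg" only if the bytes really are JPEG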
One way of saving the file is to convert it to binary format and store it in the database; another is to use the App_Data folder.
Which storage option to use depends on your requirements. See this post as well.
Set an upload limit by adding the maxRequestLength attribute to Web.config like this, where the file size is specified in KB:
<httpRuntime maxRequestLength="51200" executionTimeout="3600" />
You can save your trusted data alongside (outside) the htdocs/www folder so that users cannot access that folder directly. If you are using Apache, you can also add .htaccess authentication to your trusted data (for .htaccess you should keep your .htpasswd file outside the htdocs/www folder as well).

How do I generate files and then zip/compress with Heroku?

I sort of want to do the reverse of this.
Instead of unzipping and adding the collection of files to S3, I want to
On user's request:
generate a bunch of xml files
zip the xml files with some images (pre-existing images hosted on s3)
download zip
Does anybody know a good way of doing this? I think I could manage this no problem on a normal machine, but Heroku complicates things somewhat in that it has a read-only filesystem.
From the heroku documentation on the read-only filesystem:
There are two directories that are writeable: ./tmp and ./log (under your application root). If you wish to drop a file temporarily for the duration of the request, you can write to a filename like #{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no guarantee that this file will be there on subsequent requests (although it might be), so this should not be used for any kind of permanent storage.
You should be able to write your generated XML files to tmp/ and keep track of the names, download and write the S3 files to the same directory, and (maybe?) invoke a zip command as long as the output is in tmp/, then serve the file to the browser with the correct MIME type to prompt a download. I would only be concerned with how big the file is and whether Heroku has an undocumented limit on what they'll allow in the tmp directory. Especially since you are only performing this action for a one-time download in the duration of a single request, I think you have a good chance of being able to do it.
Edit: Looking around a bit, you might be able to use something like RubyZip to create your zip file if you want to avoid calling system commands.
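A rough sketch of that flow with rubyzip, writing only under tmp/ as the Heroku docs quoted above require (the entry names and the XML-building and S3-fetching helpers are placeholders, not real methods):
require "zip"          # the rubyzip gem
require "fileutils"

tmp_dir  = File.join(RAILS_ROOT, "tmp")
zip_path = File.join(tmp_dir, "export_#{Process.pid}.zip")

Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
  # Generated XML can be streamed straight into the archive -- no extra temp files
  generated_xml_documents.each do |name, xml|            # hypothetical helper returning name => xml
    zipfile.get_output_stream("xml/#{name}") { |io| io.write(xml) }
  end

  # Images pulled from S3 are written to tmp/ first, then added to the archive
  s3_image_keys.each do |key|                            # hypothetical helper listing the S3 keys
    local = File.join(tmp_dir, File.basename(key))
    File.binwrite(local, fetch_from_s3(key))              # hypothetical S3 download
    zipfile.add("images/#{File.basename(key)}", local)
  end
end

# Serve zip_path with the correct MIME type (e.g. send_file in Rails) to prompt the download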

Finding unused images in a Rails app?

I'm familiar with tools like Deadweight for finding CSS not in use in your Rails app, but does anything exist for images? I'm sitting in a project with a massive directory of assets from working with a variety of designers and I'm trying to trim the fat in this project. It's especially a pain when moving assets to our CDN.
Any thoughts?
It depends greatly on the code using the images. It's always possible that a filename is computed (by concatenating two values, string substitution, etc.), so simply grepping by filename isn't necessarily enough.
You could try running wget (probably already installed if you've got a Linux machine; otherwise http://users.ugent.be/~bpuype/wget/ ) to mirror your whole site. Do this on the same machine or network if you can; it'll crawl your whole site and grab all the images.
# mirror mysite.com accepting only jpg, png and gif files
wget -A jpg,png,gif --mirror www.mysite.com
Once you've done that, you're going to have a second copy of your site's hierarchy containing any images that are actively linked to by any page reachable by crawling your site. You can then backup your source image directory, and replace it with wget's copy. Next, monitor your log files for 404's pertaining to gif/jpg/png files. Hope that helps.
Finding unused images should be easier than CSS.
Just find *.jpg, *.png, and *.gif files with glob, put those filenames into a dictionary or array, and search for each of them in your HTML, CSS, and JS files. Remove a filename whenever it is found and what's left is your unused list; then move those images to another folder with the same directory structure (which makes them easy to restore, just in case).
Basically like this; of course it will not work for filenames that are encrypted/encoded/obfuscated.
require "fileutils"
img=Dir.glob("**/*.jpg")+Dir.glob("**/*.png")+Dir.glob("**/*.gif")
data=Dir.glob("**/*.htm*")+Dir.glob("**/*.css")+Dir.glob("**/*.js")
puts img.length.to_s+" images found & "+data.length.to_s+" files found to search against"
content=""
data.each do |f|
content+=File.open(f, 'r').read
end
img.each do |m|
if not content=~ Regexp.new("\\b"+File.basename(m)+"\\b")
FileUtils.mkdir_p "../unused/"+File.dirname(m)
FileUtils.mv m,"../unused/"+m
puts "Image "+m+" moved to ../unused/"+File.dirname(m)+" folder"
end
end
PS: I used FileUtils because plain mkdir and mv don't work in my Windows version of Ruby.
And I am not good at Ruby, so please double-check it before you use it.
Here are the sample results from running it in the root folder of a sample Rails app on my Windows machine:
---\ruby>ruby img_coverage.rb
5 images found & 12 files found to search against
Image depot/public/images/test.jpg moved to ../unused/depot/public/images folder
If your image URLs often come from computed/concatenated strings and other things that are hard to track programmatically within your source code, and your application is in heavy use, you could try a soft "honeypot" approach like this:
Move all the assets to a different directory, e.g. /attic
Set up an empty /images directory (or what your asset directory is called)
Set up a .htaccess file (if you're on Apache, of course) that, using the -f flag, redirects all requests for nonexistent image files to a script (see the sketch below)
The script copies the requested file from the /attic into the /images directory and displays it
The next request to that image will go directly to the image, because it exists now
After some time and sufficient usage, all needed images should have been copied to the assets directory.
It's a "soft" approach of course because a dialog / situation could have not been opened/entered/used by any user during that time (things like error message icons for example). But it will recognize all used files, no matter where they're requested from, and might help sort out much of the unneeded files.
If your file manager supports it, try sorting your images directory by the files' "last accessed" date. Files that haven't been accessed in a long time most likely aren't used any longer.
Along the same lines, you can also filter or grep through your web server's logs and make a list of the image files that it has served up in the last several months. Any images not in this list are likely unused.
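A quick sketch of that comparison in Ruby (the log location and format, the regex, and the asset directory are all assumptions):
require "set"

# Collect the image file names the web server has actually served recently
served = File.foreach("/var/log/apache2/access.log")      # assumed log location
             .flat_map { |line| line.scan(%r{/\S*?\.(?:png|jpe?g|gif)}i) }
             .map { |path| File.basename(path) }
             .to_set

# Compare against everything sitting in the asset directory
all_images = Dir.glob("public/images/**/*.{png,jpg,jpeg,gif}")
unused     = all_images.reject { |f| served.include?(File.basename(f)) }

puts unused   # candidates for removal -- verify by hand before deleting anything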
