I can't use sessions.
So here's the scenario: I want the user to upload an image, but that image needs to be a particular size. So I allow them to upload an image of any size, store it temporarily on the server (resized so it fits on the webpage), display it back to the user, and let the user crop it. I then send the crop details back to the server, crop the image, save it, and use it as the user's profile picture.
I tried to do all this before uploading, but apparently that's a security risk and not allowed.
So how do I temporarily store this file? And what if the user never comes back to crop it? I don't want a large image like that sitting on my server. How would I go about removing the file in a stateless application like this?
Files are stored on a CDN.
There are lots of ways to solve this, but perhaps an easy way is to call a little routine every time a file is uploaded that checks for, and deletes, any 'large' files that are over xxx minutes old.
Alternatively, schedule a job in the task scheduler to do the same every xxx minutes.
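Either way, the cleanup itself can be tiny. A minimal sketch in C#, assuming the temp uploads live in a local folder (the folder path and age threshold are placeholders standing in for your "xxx minutes"):

```csharp
using System;
using System.IO;

public static class TempUploadCleaner
{
    // Call this from the upload action, or from a scheduled task.
    public static void DeleteStaleUploads(string tempFolder, TimeSpan maxAge)
    {
        foreach (var path in Directory.EnumerateFiles(tempFolder))
        {
            var age = DateTime.UtcNow - File.GetLastWriteTimeUtc(path);
            if (age > maxAge)
            {
                try { File.Delete(path); }
                catch (IOException) { /* file may still be in use; skip it */ }
            }
        }
    }
}
```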
You can use TempData, which is similar to Session but is removed once it has been read.
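A minimal sketch of the TempData hand-off in ASP.NET MVC (the key name and temp-folder location are illustrative, not from the question):

```csharp
using System.IO;
using System.Web;
using System.Web.Mvc;

public class ProfileController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        // Save the original somewhere temporary, then hand the path to the
        // next request via TempData.
        var tempDir = Server.MapPath("~/App_Data/tmp");
        Directory.CreateDirectory(tempDir); // no-op if it already exists
        var tempPath = Path.Combine(tempDir, Path.GetRandomFileName());
        file.SaveAs(tempPath);

        TempData["TempImagePath"] = tempPath;
        return RedirectToAction("Crop");
    }

    public ActionResult Crop()
    {
        // Reading the value removes it from TempData.
        var tempPath = TempData["TempImagePath"] as string;
        if (tempPath == null)
            return RedirectToAction("Upload");
        return View((object)tempPath);
    }
}
```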
Users can create a PDF in my app, which takes some time to generate, so it has to be done in a background job. No problem, but then there is a delay, and the user must be notified that the PDF is ready.
So the first choice is between sending an email with a download link and a push notification in the app itself. My preference is the push notification, so I guess ActionCable is the way to go? My app runs on Heroku; is ActionCable still a good choice there, or is another solution preferable?
Then there is another consideration: where to store the generated PDF until the user downloads it? I could upload it to Azure/S3/etc. with ActiveStorage, or I could store it temporarily in an app folder and delete it after download. My preference is the latter, because the PDF is only there for a few minutes, so the hassle of storing it in the cloud isn't really needed?
You have a very broad question here, and the answer depends very much on the needs of your users and the experience you want them to have.
I'll start with the simplest part, in terms of temporary storage of the PDF. There are several things to bear in mind here.
I would say that from a scalability and application security standpoint, storing the PDF in the cloud is the way to go. Opening up writable directories on your application server carries a risk. Also, if you ever need to scale to more than one server, local storage will not work. Deleting items from cloud storage is not hard with the appropriate APIs.
Is it essential for the user to be authenticated in some way to download the PDF? This is more challenging if you push the PDF to a cloud bucket (unless you give the PDF a very complex, unguessable name, and expose that name only through the authenticated application). If the data is less sensitive, then your email notification can show the link directly, but you won't easily know whether a user has retrieved the PDF and it is now ready to be deleted.
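On the unguessable-name point: a GUID-based name is effectively impossible to enumerate. A throwaway sketch in C# for illustration (in Rails, SecureRandom.hex would play the same role):

```csharp
using System;

class UnguessableName
{
    static void Main()
    {
        // A v4 GUID carries 122 random bits, so the name below is
        // practically impossible to guess or enumerate.
        string pdfName = Guid.NewGuid().ToString("N") + ".pdf";
        Console.WriteLine(pdfName); // e.g. "3f2c9a1b0d8e4f6a9c2b1d0e8f7a6b5c.pdf"
    }
}
```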
In terms of notification, I'd go with email for several reasons. Simplicity is the main one. Do you have experience with ActionCable? It appears simple on the surface, but there are many things to bear in mind when using it: infrastructure and UI being the major ones. Also, from a user experience perspective, are users likely to hang around in the application waiting for the PDF to be completed? What happens if they logout? How will they know the PDF is available?
If the timescale for generating the PDF is short and perfectly optimized scalability is not a big deal, you could consider a simpler mechanism that checks for user notifications on every user action (a simple query against a user_notifications table, for example) and uses a flash or some other session flag that the UI can check and use to asynchronously retrieve the notification; a sketch follows.
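The question is about Rails, but the polling pattern is generic; here it is sketched in C# purely for illustration. The user_notifications table and its columns (user_id, message, read_at) are invented names:

```csharp
using System.Collections.Generic;
using System.Data;

public class NotificationChecker
{
    // Expects an already-open connection; run this on each user action and
    // surface any returned messages to the UI.
    public static List<string> FetchUnread(IDbConnection db, int userId)
    {
        var messages = new List<string>();
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "SELECT message FROM user_notifications " +
                              "WHERE user_id = @userId AND read_at IS NULL";
            var p = cmd.CreateParameter();
            p.ParameterName = "@userId";
            p.Value = userId;
            cmd.Parameters.Add(p);

            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    messages.Add(reader.GetString(0));
        }
        return messages;
    }
}
```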
Just ideas. Impossible to give definitive answers.
I would like to know the proper way of caching an image and storing it to Parse.com, loading it back, updating the cache, and so on.
So here is the scenario:
I have a social network app where users can upload their profile picture.
Once the user uploads the picture, I believe we should cache the image on the device.
Users can also change their profile picture from the website.
My question is: if a user updates their profile picture, how can I detect the change and update the cache? Image caching libraries detect changes from the URL, but the problem is that the URL always stays the same.
So how do we know that the picture has been updated, so we can re-download it to the device and replace the cache?
Thank you
You can:
1. Set a validity period for the image cache, so the cache is refreshed with the latest data from the server every, say, 24 hours or so.
2. Keep a 'timestamp' on your server that is updated whenever the user uploads a profile picture, and keep a local timestamp for every image URL on the device when the image is cached. Compare the two on every app launch, or every time the profile page is opened; when the server timestamp is newer, invalidate the cache and re-download the image (see the sketch after this list). Make sure the local timestamp is updated every time the image is cached.
3. Maintain a file 'hash' string on the server. When you download the image file, compute a hash locally and keep it for every image URL. Compare the local value to the server hash on every app launch, or every time the profile page is opened; if they differ, invalidate the cache and re-download the image. Make sure the local file hash is updated every time the file is downloaded. However, this will not be possible if your image caching module does not give you direct access to the downloaded physical file.
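Here is option 2's comparison logic in sketch form (the question is iOS, so this C# is purely illustrative of the pattern; in a real app the URL-to-timestamp map would be persisted on the device, not held in memory):

```csharp
using System;
using System.Collections.Generic;

public class ProfileImageCache
{
    // URL -> server timestamp that was current when we cached the image.
    private readonly Dictionary<string, DateTime> _cachedVersions =
        new Dictionary<string, DateTime>();

    // serverTimestamp is the "profile picture last updated" value fetched
    // from the server (e.g. from the Parse object).
    public bool NeedsRedownload(string url, DateTime serverTimestamp)
    {
        DateTime cachedVersion;
        if (!_cachedVersions.TryGetValue(url, out cachedVersion))
            return true;                        // never cached
        return serverTimestamp > cachedVersion; // server copy is newer
    }

    public void MarkCached(string url, DateTime serverTimestamp)
    {
        _cachedVersions[url] = serverTimestamp;
    }
}
```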
I assume you have a clear idea of an image caching strategy, and I hope this answers your question regarding 'how do we know if the picture is already updated and re-download it to the device and replace the cache?'.
If you want to know how to cache an image, you can use UIImage+AFNetworking.
I'm developing an Azure website where users can upload blobs and metadata. I want the uploaded content to be deleted after some time.
The only way I can think of is going for a cloud app with a worker role instead of a website, where the worker role checks every hour or so whether an uploaded file has expired and deletes it. However, I'm going for a simple website here, without worker roles.
I have a function that checks whether an uploaded item should be deleted, and if the user does something on the page I can easily call it. But if the user isn't doing anything when the time runs out, the function is never called and the file is never deleted; the storage would never be cleaned up. How would you solve this?
Thanks
Too broad to give one right answer, as you can solve this in many ways. But since you're using Web Sites, I do suggest you look at WebJobs and see if this might be the right tool for you (as it gives you the ability to run periodic jobs without the bulk of extra VMs in a web/worker configuration). You'll still need a way to manage your metadata to know what to delete.
Regarding other Azure-specific built-in mechanisms, you can also consider queuing delete messages with an invisibility time equal to the time the content is to be available. After that time expires, the queue message becomes visible, and any queue consumer will then see the message and be able to act on it. That consumer can be your WebJob (which has SDK support for queues) or really any other mechanism you build.
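A sketch of that queued-delete idea using the classic Azure Storage SDK (Microsoft.WindowsAzure.Storage); the queue name, message format, and 24-hour lifetime are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class DeleteScheduler
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<connection string>");
        var queue = account.CreateCloudQueueClient()
                           .GetQueueReference("delete-requests");
        queue.CreateIfNotExists();

        // The message names the blob to delete and stays invisible for as
        // long as the content should remain available (24h here).
        var message = new CloudQueueMessage("uploads/blob-to-delete.dat");
        queue.AddMessage(message, initialVisibilityDelay: TimeSpan.FromHours(24));

        // A WebJob (or any other consumer) polling the queue will only see
        // the message once the delay expires, then delete the blob it names.
    }
}
```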
Again, a very broad question with no single right answer, so I'm just pointing out the Azure-specific mechanisms that could help solve this particular problem.
Like David said in his answer, there can be many solutions to your problem. One solution could be to rely on the blob itself. In this approach, you periodically fetch the list of blobs in the blob container and decide whether each blob should be removed. The periodic fetching could be done through an Azure WebJob (if the application is deployed as a website) or through an Azure Worker Role. The worker role approach is independent of how your main application is deployed; it works whether the application is deployed as a cloud service or as a website.
With that, there are two possible approaches you can take:
Rely on the blob's Last Modified date: Whenever a blob is updated, its Last Modified property gets updated. You can use that to identify whether the blob should be deleted (see the sketch after these two approaches). This approach would work best if the uploaded blob is never modified.
Rely on the blob's custom metadata: Whenever a blob is uploaded, you could set the upload date/time in the blob's metadata. When you fetch the list of blobs, you compare that metadata value with the current date/time and decide whether the blob should be deleted.
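A minimal sketch of the first approach using the classic Azure Storage SDK; the container name and 24-hour threshold are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobCleanup
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<connection string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("uploads");

        foreach (var item in container.ListBlobs(null, useFlatBlobListing: true))
        {
            var blob = item as CloudBlockBlob;
            if (blob == null) continue;

            // LastModified is set by the storage service on every write.
            var lastModified = blob.Properties.LastModified;
            if (lastModified.HasValue &&
                DateTimeOffset.UtcNow - lastModified.Value > TimeSpan.FromHours(24))
            {
                blob.Delete();
            }
        }
    }
}
```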
Another approach might be to use the container name as the "expiry date".
This might make deletion easier, as you could then just remove expired containers; a sketch follows.
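A sketch of that idea; the "expires-yyyyMMdd" naming convention is invented for illustration:

```csharp
using System;
using System.Globalization;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ContainerCleanup
{
    static void Main()
    {
        var client = CloudStorageAccount.Parse("<connection string>")
                                        .CreateCloudBlobClient();

        foreach (var container in client.ListContainers("expires-"))
        {
            // The container name encodes the expiry date, e.g. "expires-20240131".
            var datePart = container.Name.Substring("expires-".Length);
            var expiry = DateTime.ParseExact(datePart, "yyyyMMdd",
                                             CultureInfo.InvariantCulture);
            if (expiry < DateTime.UtcNow.Date)
                container.Delete(); // removes the container and all blobs in it
        }
    }
}
```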
I decided to use Amazon S3 for document storage for an app I am creating. One issue I've run into: while the files need to be uploaded to S3, I also need to create a document object in my app so my users can perform CRUD actions on it.
One solution is to allow a double upload. A user uploads a document to the server my Rails app lives on; I validate it and create the object, then pass the file on to S3. One issue with this is that progress indicators become more complicated: most out-of-the-box plugins would show the client that the file has finished uploading once it reaches my server, but then there would be a decent delay while the file travels from my server to S3. It also uses bandwidth that does not seem necessary.
The other solution I am considering is to upload the file directly to S3 with one AJAX request and, when that succeeds, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means running some cleanup code in S3 if the validation fails.
Both seem equally messy.
Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
Unless there's a particular reason not to use Paperclip, I'd highly recommend it. Used in conjunction with delayed_job and delayed_paperclip, the user uploads the file to your server's filesystem, where you perform whatever validation you need. A delayed job then processes the file and stores it on S3. Really, really easy to set up, and a better user experience.
We're using ASP.NET MVC and our action does this:
pull records from DB
mark records as downloaded
push zipped download to browser
Now the problem comes when the download doesn't complete for some reason - maybe the user clicks "Cancel" or IE pops up that download security bar. I'm wondering if there's an alternative solution.
Could we push the download to the user and then only mark records as downloaded when we're sure they've received the right number of bytes? I have to say I'm struggling with this one; a solution that is as easy as possible for end users would be fantastic.
There isn't any reliable way to do this without a process running on the client which can verify the transfer completed. Of course, the only process we can reasonably expect the user to already have, or be willing to install, is Flash.
Only Flash 10 supports saving files directly to disk as the user requests. (Previous versions had a "shared object", which was more like a very large cookie space than anything else: for saving reusable application data, not for transferring files.) Read up here for info on how to interact with the end user's filesystem via Flash 10.
Essentially there is a method, save(), which will push data to a location of the user's choosing. The specific location is hidden from your code; for obvious security reasons, you merely push the file into a black box and Flash handles the rest.
The only real bit of info missing here is how to get your file into the Flash player, but anyone with a little Flash experience should have no trouble figuring that out with a few minutes of research. Without Flash experience you should still have it working in under a day.
Rather than simply redirecting the user to the resource to be downloaded (thereby causing the "would you like to download a file" popup), you might try two things: push the resource out of a page as a byte array, and once the download has completed, redirect the download page to another page. On this page you can then add a step to your workflow asking whether the download went OK or not. Also, if they got this far, you could assume (ass-u-me) that it worked. Actually tracking how far the download got is not doable, I think, as you have nothing on the other end monitoring bytes received. A sketch of the byte-array approach follows.
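A minimal sketch of the byte-array push in ASP.NET MVC; the action names and the data-access helper are invented:

```csharp
using System.Web.Mvc;

public class DownloadController : Controller
{
    public ActionResult GetExport(int exportId)
    {
        // Serve the zip from our own action instead of redirecting to a
        // static resource, so the app stays in control of the workflow.
        byte[] zipBytes = LoadZipFromDb(exportId); // hypothetical data access
        return File(zipBytes, "application/zip", "export.zip");
    }

    // The page that triggered the download can then send the user here to
    // confirm (or deny) that the download worked.
    public ActionResult ConfirmDownload(int exportId)
    {
        return View();
    }

    private byte[] LoadZipFromDb(int exportId)
    {
        // Placeholder for the real record lookup and zipping.
        return new byte[0];
    }
}
```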
I don't believe there is. If this is necessary, you may need to use a Silverlight (or Flash) control in conjunction with your application.
Basically, the approach with either one would be to open a socket connection to the HTTP URL and save the file to the appropriate path on the user's drive. Once the download is complete, you could have the control generate a hash value from the file and send that back to some ASP page; a sketch of the hash step follows. If the hash value is never submitted, or is incorrect, you know they didn't finish the file.
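A sketch of the client-side hash step (the file path is a placeholder, and SHA-256 is just one reasonable choice of algorithm; the control would POST this value back to the server for comparison):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class DownloadVerifier
{
    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream))
                               .Replace("-", "")
                               .ToLowerInvariant();
        }
    }

    static void Main()
    {
        // If this value never reaches the server, or doesn't match the
        // server's hash of the same file, the download didn't complete.
        Console.WriteLine(HashFile(@"C:\downloads\export.zip"));
    }
}
```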
Even checking that all the bytes were sent doesn't really guarantee anything:
The user might still cancel the download before saving it, or their browser might crash, etc.
The recipient might not be the user. It might be a proxy server with a virus scanner that decides to block the transfer, etc.