I am working with the Grails 3.x framework and trying to upload a video file whose size will be greater than 10 MB.
Below is the code, which works perfectly for storing image files uploaded from the client browser on the server:
    File fileIns = new File("/opt/a.jpg")
    fileIns << params.image
Can I use the same code for saving videos? If so, params.video may consume a huge amount of memory; how can I optimize that?
Or is there another method to stream the upload part by part and save it as a video file?
How do WhatsApp and Telegram transfer video files to their servers?
Any suggestions are appreciated.
If you are dealing with large data, I'd suggest going full Java and streaming the upload: copy the multipart file's InputStream to a FileOutputStream in small buffers instead of materializing the whole request body in memory. In a Grails controller you can also call request.getFile('video').transferTo(destinationFile), which delegates to Spring's MultipartFile and typically spools the upload to disk rather than holding the whole video in memory. Another option is Apache HttpClient, which can likewise stream request and response bodies.
Our main app allows users to post media (videos and images) as well as documents to a timeline, with a file size limit of 500 megabytes.
We're currently working on a Share Extension so users can share files from anywhere in the OS to that timeline. However, we're running into the issue that the Share Extension has a hard memory limit of 120 megabytes.
The current implementation in our main app requires that the files selected by the user be converted to a Data object before being compressed and then uploaded to the API via multipart form data. To do this we must load the files into memory, which is where we hit the hard memory limit.
Apple's documentation is very brief, and there's not a lot to be found on SO or elsewhere about how to achieve this. There are some workarounds that store the files (or references to them) in UserDefaults and then open the main app to handle them, but that rather defeats the purpose of sharing something quickly via the extension.
What would be a way around this limit to allow us to upload these large files?
Late to this question, but I ran into the same issue just now. The problem is the memory limit Apple imposes on app extensions: you cannot hold a Data object in memory if it is larger than roughly 50-100 MB. I changed the upload to an NSURLSessionUploadTask created with uploadTaskWithRequest:fromFile:completionHandler:, which uploads the file directly from the file system. If you cannot do that, you need to provide the multipart form data and boundaries yourself in the delegate of uploadTaskWithStreamedRequest:, but note that there, too, you cannot read the whole file at once; you have to stream it as well.
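As a minimal Swift sketch of the fromFile: approach (the endpoint, form field name, and boundary are placeholders, and error handling is trimmed): assemble the multipart body in a temporary file, appending the payload in small chunks, then hand that file to the session so it is streamed from disk:

    import Foundation

    // Build the multipart body in a temp file rather than in memory,
    // then let URLSession stream that file from disk.
    func uploadLargeFile(at fileURL: URL, to endpoint: URL) throws -> URLSessionUploadTask {
        let boundary = "Boundary-\(UUID().uuidString)"
        let bodyURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
        guard FileManager.default.createFile(atPath: bodyURL.path, contents: nil) else {
            throw CocoaError(.fileWriteUnknown)
        }

        let body = try FileHandle(forWritingTo: bodyURL)
        body.write("--\(boundary)\r\n".data(using: .utf8)!)
        body.write("Content-Disposition: form-data; name=\"file\"; filename=\"\(fileURL.lastPathComponent)\"\r\n".data(using: .utf8)!)
        body.write("Content-Type: application/octet-stream\r\n\r\n".data(using: .utf8)!)

        // Copy the payload in small chunks so the whole file is never in memory.
        let input = try FileHandle(forReadingFrom: fileURL)
        while true {
            let chunk = input.readData(ofLength: 1024 * 1024) // 1 MB at a time
            if chunk.isEmpty { break }
            body.write(chunk)
        }
        input.closeFile()
        body.write("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
        body.closeFile()

        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

        // uploadTask(with:fromFile:) streams the body file from disk.
        let task = URLSession.shared.uploadTask(with: request, fromFile: bodyURL)
        task.resume()
        return task
    }

In a Share Extension you would normally create the task on a background URLSessionConfiguration (with a shared container identifier) so the transfer can outlive the extension's short process lifetime.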
I have a small webapp that runs on a server intended for use in a setting without reliable internet access (i.e., the app can't depend on outside resources in production). The purpose of the app is simply to upload image files, read their metadata, and file them in the right location on disk. Until recently there was no problem with this process; then I noticed that some files were missing part of their metadata (specifically the creation date). On closer inspection, these turned out to be files shot on my iPhone as HEIC/HEIF photos and uploaded to the webpage directly from the phone.
Due to the design of the webapp, the filename of the uploaded file is shown on the page, and every time an HEIC photo is uploaded, the displayed filename ends in .jpeg.
I've had a hard time finding good documentation on this, but it sounds like the iPhone's current default is to convert HEIC files to JPEG when it looks like they are being transferred somewhere that may not be able to read them; I guess a website form falls into that category. It also appears that some of the EXIF data disappears as part of this conversion.
So, does anyone know a way to retain the EXIF data? My main constraint is that the upload must happen through the webapp and that multiple users will be using it, so I can't simply have everyone change their iPhone settings to shoot only JPEGs.
In case it matters, the webapp is running on node.js and expressjs.
I am using Amazon S3 as the image storage platform for my app. However, I am finding that the download speed is quite slow. I use Alamofire with the link to the image in S3 to download each image. I want to test the download speed from Amazon S3 (if it is too slow I need to consider another service), on my phone over both Wi-Fi and cellular, and also find out the maximum download speed.
My images are approximately 200 KB to 400 KB each and are displayed in table view cells. I am looking for an accurate benchmark to compare against what Amazon would expect me to get. Is there some smart way or framework to do this?
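One crude way to get a number before reaching for any framework is to time a plain URLSession download of a single image and compute the throughput (the URL below is a placeholder):

    import Foundation

    // Rough throughput check: time a direct GET of one image.
    let imageURL = URL(string: "https://my-bucket.s3.amazonaws.com/images/photo.jpg")!
    let start = Date()
    URLSession.shared.dataTask(with: imageURL) { data, _, error in
        guard let data = data else {
            print("Download failed: \(String(describing: error))")
            return
        }
        let seconds = Date().timeIntervalSince(start)
        let kb = Double(data.count) / 1024
        print(String(format: "%.0f KB in %.2f s -> %.0f KB/s", kb, seconds, kb / seconds))
    }.resume()

Running this a handful of times on both Wi-Fi and cellular and comparing the medians gives a baseline to hold against whatever an SDK reports.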
The AWS iOS SDK can be used to facilitate your use of S3.
Its transfer utility has a downloadToURL method that will download files for you:
https://github.com/aws/aws-sdk-ios/blob/master/AWSS3/AWSS3TransferUtility.m#L358
You can also experiment with accelerated mode (S3 Transfer Acceleration) to see if it solves your problem:
https://github.com/aws/aws-sdk-ios/blob/master/AWSS3/AWSS3TransferUtility.m#L392
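Roughly, in Swift, that looks like the sketch below. It assumes the SDK's credentials/service configuration were already set up at startup, uses placeholder bucket and key names, and acceleration must also be enabled on the bucket itself:

    import AWSS3

    // Register a transfer utility with acceleration turned on.
    let config = AWSS3TransferUtilityConfiguration()
    config.isAccelerateModeEnabled = true
    AWSS3TransferUtility.register(
        with: AWSServiceManager.default().defaultServiceConfiguration!, // assumes this was configured at launch
        transferUtilityConfiguration: config,
        forKey: "accelerated"
    )

    // Time a download to compare against the non-accelerated path.
    let utility = AWSS3TransferUtility.s3TransferUtility(forKey: "accelerated")
    let start = Date()
    _ = utility?.downloadData(fromBucket: "my-bucket", key: "images/photo.jpg", expression: nil) { _, _, data, error in
        if let data = data {
            print("Downloaded \(data.count) bytes in \(Date().timeIntervalSince(start))s")
        } else if let error = error {
            print("Download failed: \(error)")
        }
    }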
I have a question: my app is a short-video sharing application, much like Vine, but when it is used on the subway or in other places with a weak signal, uploads sometimes fail, making for a poor user experience.
I am a newbie at network programming and iOS. I did a lot of searching on Google and have some general sense of it now; let me sum up my findings, and please give me suggestions.
My requirements are: 1. support resuming when an upload is interrupted; 2. upload successfully even on a weak signal. I do NOT need to worry about real-time concerns or how to compress the video; treating the video as an ordinary file is totally fine. BTW, the server is REST-style, and I use POST to upload the data.
Questions:
1. Which is the better fit for my requirements: streams (by which I mean data streams such as NSOutputStream and NSInputStream, not live video streaming; the video is only played after all of it has uploaded) or dividing the whole file into several chunks and uploading it chunk by chunk?
2. Someone said using streams is good for resource efficiency, since a stream reads the file into memory through a buffer whose size you control, and once the connection to the server is set up you handle failures through delegate callbacks, so it is easy to use.
3. Uploading chunk by chunk is said to be better for speed. That statement puzzles me: after successfully uploading one chunk, we have to release the connection's resources and set up another connection before uploading the next one, and I would expect that preparation to cost time.
4. If I upload by chunks, what size is good? One video file is almost 1 MB; someone said 8 KB is a safe choice, but...
5. Since the app needs to adapt to different signal strengths, is there a way to do that, for example choosing the chunk size based on the available bandwidth?
6. Is there a private API that already supports resuming an interrupted upload, or any Apple API that supports this? My app needs to run on iOS 5 and above, so I can NOT use NSURLSession.
7. Is concurrent uploading a way to speed things up? If so, how do I implement it, and is there any API available?
Thank you in advance for helping a newbie like me.
Your question touches on a lot of topics. iOS doesn't have a public API for streaming video (such as the FaceTime components). The main issue is that sending frame by frame requires a lot of network traffic; if you instead use the normal video writer, you get hardware compression, which is a lot better. There's more on this in: Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone), Upload live streaming video from iPhone like Ustream or Qik, and How send to stream video from iOS device to server?
If real time is not your problem, I would suggest you just use a good network manager such as MKNetworkKit or AFNetworking 2.0. They will take care of most of the aspects you asked about. For the chunk-by-chunk idea specifically, see the sketch below.
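A minimal sketch of chunk-by-chunk upload with resume, written against modern URLSession (iOS 7+, so it will not run on iOS 5; it only illustrates the approach). The X-Upload-Offset header and the server-side behavior of appending each chunk at the given offset are assumptions, not a standard protocol:

    import Foundation

    // Upload a file in fixed-size chunks, remembering how far we got so an
    // interrupted transfer can resume from `offset` instead of starting over.
    func uploadInChunks(fileURL: URL, endpoint: URL,
                        resumeFrom offset: UInt64 = 0,
                        chunkSize: Int = 256 * 1024) throws {
        let file = try FileHandle(forReadingFrom: fileURL)
        file.seek(toFileOffset: offset)
        var position = offset

        while true {
            let chunk = file.readData(ofLength: chunkSize)
            if chunk.isEmpty { break } // reached end of file

            var request = URLRequest(url: endpoint)
            request.httpMethod = "POST"
            request.setValue("\(position)", forHTTPHeaderField: "X-Upload-Offset") // hypothetical header
            request.httpBody = chunk

            // Blocking wait for brevity; real code would chain completion handlers.
            let semaphore = DispatchSemaphore(value: 0)
            var failed = false
            URLSession.shared.dataTask(with: request) { _, _, error in
                failed = (error != nil)
                semaphore.signal()
            }.resume()
            semaphore.wait()

            if failed {
                // Persist `position` somewhere durable; on the next attempt,
                // call uploadInChunks(..., resumeFrom: position) to pick up here.
                throw URLError(.networkConnectionLost)
            }
            position += UInt64(chunk.count)
        }
    }

To adapt to signal strength (your question 5), one common trick is to grow the chunk size after a run of consecutive successes and shrink it after a failure or timeout, so the transfer settles on whatever the current link can sustain.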
I am building a CDN and want to be able to stream to an iPhone and iPad. Is this possible using Amazon CloudFront?
Let me clarify: is there any documentation anywhere, or an example of someone doing this?
Progressive download works if you ensure that the media's metadata is at the beginning of the file. Google "ffmpeg qt-faststart" for the easiest way (in my experience) to do this; recent ffmpeg builds can also do it in one pass with -movflags +faststart while copying the streams. If this is not done, the player (in iOS) must download the complete file before it reaches the metadata it needs in order to play. In other words, if this step is missing from your production workflow, your "progressive download" is not functioning as progressive download at all: it downloads the entire file (so it can get to the metadata) and only then plays. This works with any video/audio file supported by your platform.
NOTE: I am not sure how this affects attempts at high-speed scrubbing. It seems the file would need to be downloaded up to the point the app is trying to scrub to.
Another alternative is to create the format needed for iOS streaming (using a segmenter/transcoder) and serve those files over HTTP through your regular CloudFront distribution. Theoretically, that should work.
To be clearer: CloudFront uses an older version of Flash Media Server (v3.5) that supports streaming over various RTMP protocols. These can be enabled by creating a Streaming Distribution (that is how we do streaming for web and Android) and using something like JW Player on the front end.
http://help.adobe.com/en_US/FlashMediaServer/3.5_TechOverview/WS5b3ccc516d4fbf351e63e3d119ed944a1a-7ffa.html
http://www.adobe.com/devnet/logged_in/ktowes_fms35.html
iOS streaming is done using HTTP Live Streaming, which is different: https://developer.apple.com/streaming/
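On the client side, consuming an HLS playlist served through your distribution needs nothing special; a plain AVPlayer will do (the playlist URL below is a placeholder):

    import AVKit

    // Minimal sketch: point AVPlayer at the HLS playlist (.m3u8).
    let url = URL(string: "https://d111111abcdef8.cloudfront.net/video/index.m3u8")!
    let controller = AVPlayerViewController()
    controller.player = AVPlayer(url: url)
    // From a UIViewController:
    // present(controller, animated: true) { controller.player?.play() }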
Your options would be to do as I mentioned above, or use EC2 and stand up your own FMS 4.5 instance (http://aws.typepad.com/aws/2012/03/live-streaming-cloudfront-fms-4-5.html).
I struggled a lot with this and finally got it working with AudioStreamer. Love it:
http://www.cocoawithlove.com/2009/06/revisiting-old-post-streaming-and.html
You simply want to use progressive download: upload the file to S3, create a distribution, and go! It's super simple.