Using send_video to upload a video, the video stretches - pyrogram

When pyrogram is used to upload videos in batches, the video frames stretch and the video duration is not displayed.
Because there are many videos with different dimensions, a fixed width and height cannot be set.
def main(video_file, fname):
    def progress(current, total):
        print(f"{current * 100 / total:.1f}%")

    app.start()
    app.send_video('chat_id', video_file, progress=progress,
                   caption=fname, parse_mode=enums.ParseMode.MARKDOWN)
    time.sleep(1)
    # Truncate the uploaded file to free disk space
    with open(video_file, 'w', encoding='utf-8') as f:
        f.write('')  # the `with` block closes the file; no explicit close() needed
    return app.stop()
Is there any solution to make the video display normally?
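pyrogram's send_video accepts width, height and duration arguments; when they are omitted, the file is sent without that metadata, which is a common cause of stretched previews and a missing duration. A minimal sketch that reads the metadata with ffprobe before uploading (assumes ffmpeg/ffprobe is installed and on PATH; the helper names are mine):

```python
import json
import subprocess

def parse_probe_json(raw):
    # Pull width/height/duration out of ffprobe's JSON output.
    stream = json.loads(raw)["streams"][0]
    return (int(stream["width"]),
            int(stream["height"]),
            int(round(float(stream.get("duration", 0)))))

def probe_video(path):
    # Ask ffprobe (part of ffmpeg) for the first video stream's metadata.
    raw = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return parse_probe_json(raw)

# width, height, duration = probe_video(video_file)
# app.send_video('chat_id', video_file, width=width, height=height,
#                duration=duration, progress=progress, caption=fname)
```

Passing the real per-file dimensions this way avoids having to hard-code one width and height for a whole batch of differently sized videos.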

Related

In MPEG-4 Part 10, is the video resolution stored in a header in the video codec or within each frame?

In MPEG-4 Part 10 (AVC), is the video resolution stored in a header in the video codec or within each frame, so that each frame can have its own resolution?
If both, which of the two do mainstream video players such as VLC take into account during playback?

Scale the video up from a smaller size(Re-scale)

What I'm actually looking for is not just to change the quality but to resize the entire video to a greater resolution using AVFoundation.
I have videos at 320x240 and 176x144 in mp4 format and I want to resize them up to 1280x720, but the AVAssetExportSession class does not allow scaling the video up from a smaller size.
Try AVMutableVideoCompositionLayerInstruction and CGAffineTransform.
This code will help the understanding:
https://gist.github.com/zrxq/9817265
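The linked gist applies a scale transform to a composition layer instruction. The scale factors are just the ratio of target to source dimensions; a quick sketch of that arithmetic (the helper name is mine, not from the gist):

```python
def upscale_factors(src_w, src_h, dst_w, dst_h):
    # Scale factors you would feed to CGAffineTransformMakeScale(sx, sy);
    # unequal factors will distort the aspect ratio.
    return (dst_w / src_w, dst_h / src_h)

# 320x240 -> 1280x720 scales the axes unevenly:
# upscale_factors(320, 240, 1280, 720) -> (4.0, 3.0)
```

To preserve the aspect ratio, use the smaller of the two factors on both axes and letterbox (or crop) the remainder.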

iOS Video File Sizes and Bandwidth Considerations

I'm building an app whose core functionality is centered around 1-10 second videos. Currently, I'm recording video using PBJVision with the preset set to AVCaptureSessionPresetMedium. A 10 second video is around ~3-5MB. Considering each user could theoretically download hundreds or even thousands of videos a day, I was wondering if there was a more bandwidth efficient way of packing these videos up.
Could WebM be a more suitable container format?
I searched across the web, but couldn't find any articles pertaining to this specific question.
Edit: this looks promising
Modern video codecs (including WebM's VP8) usually achieve a compression ratio of around 1/50. By tuning codec parameters we can reach roughly 1/100 (IMHO), but it is very difficult and the picture quality is horrible.
Roughly, we can think of 1 camera pixel as 1.5 bytes (YUV at 12 or 16 bits per pixel; 1.5 bytes corresponds to 12-bit).
If the resolution is 720x480 and the frame rate is 30/sec:
720 x 480 x 1.5 x 30 = 15,552,000 bytes/sec
x 10 sec = 155,520,000 bytes
/ 50 = 3,110,400 bytes
~= 3MB
It seems PBJVision is doing well.
Reducing the resolution or lowering the frame rate would be the first things to consider, I think.
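The estimate above can be written as a quick back-of-the-envelope function (the 1.5 bytes/pixel and the 1/50 compression ratio are the assumptions stated above, not measured values):

```python
def estimated_size_bytes(width, height, fps, seconds,
                         bytes_per_pixel=1.5, compression_ratio=50):
    # Raw YUV size divided by an assumed codec compression ratio.
    raw = width * height * bytes_per_pixel * fps * seconds
    return raw / compression_ratio

# 720x480 @ 30 fps for 10 s at ~1/50 compression:
# estimated_size_bytes(720, 480, 30, 10) -> 3110400.0 (~3 MB)
```

Plugging in a lower resolution or frame rate shows immediately how much bandwidth either change would save.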
iOS won't play back WebM unless you use a software decoder. A software decoder will take more CPU/battery and produce more heat. And WebM will not even solve your problem: what you want is to reduce the bitrate, but that will also reduce the quality, so it's a trade-off.

extract frame from video

I am working on an app where I have to record a video, and then I need every frame of that video. Can I use UIImagePickerController to create the video? Also, how do I extract all the frames from the video and save them?
e.g. if the movie has 30 FPS,
then for a 1-second clip it will extract 30 images,
so for 3 seconds, 30 x 3 = 90 images
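The question is iOS-specific, but the extraction step itself is easy to prototype with ffmpeg from any language. A hedged Python sketch (assumes ffmpeg is installed; the helper names and filename pattern are mine):

```python
import subprocess
from pathlib import Path

def build_extract_cmd(video_path, out_dir, fps=None):
    # ffmpeg command that dumps frames as numbered PNGs;
    # with fps=None it writes every frame of the source.
    cmd = ["ffmpeg", "-i", str(video_path)]
    if fps is not None:
        cmd += ["-vf", f"fps={fps}"]
    cmd.append(str(Path(out_dir) / "frame_%05d.png"))
    return cmd

def extract_frames(video_path, out_dir, fps=None):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(build_extract_cmd(video_path, out_dir, fps), check=True)
```

As in the example above, a 30 FPS source produces 30 images per second of footage unless you pass a lower fps to thin them out.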

Exporting AVCaptureSession video in a size that matches the preview layer

I'm recording video using AVCaptureSession with the session preset AVCaptureSessionPreset640x480. I'm using an AVCaptureVideoPreviewLayer in a non-standard size (300 x 300) with the gravity set to aspect fill while recording. It's setup like this:
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
_previewLayer.frame = _previewView.bounds; // 300 x 300
[_previewView.layer addSublayer:_previewLayer];
After recording the video, I want to write it to a file in quicktime format. During playback, I'm once again playing the video in a 300 x 300 non-standard size layer. Because these videos will ultimately be transferred over a network connection, it seems wasteful to keep the full 640x480 video.
What's the best way to export a video to match my 300 x 300 preview layer? I'm an AVFoundation noob, so if I'm going about this the wrong way please let me know. I just want the recorded video displayed in the preview layer during recording to match the video that is exported on disk.
Video resolution and video size are two different things. Resolution stands for clarity: higher resolution means higher clarity. Video size, on the other hand, is the bounds in which the video is displayed. Depending on the video's resolution and aspect ratio, the video will stretch or shrink when seen in the viewer.
Now that the facts have been explained, you can find your answer here:
How do I use AVFoundation to crop a video
Perform the steps in this order:
Record Video to disk.
Save from Disk to asset library.
Delete from disk.
Perform the steps mentioned in the above link.
NOTE: Perform the steps after recording and writing your video to asset library, saveComposition being the saved asset.
and where the link sets videoComposition.renderSize = CGSizeMake(320, 240);, provide your size instead: videoComposition.renderSize = CGSizeMake(300, 300);
One piece of advice: writing the file to disk, then to the library, then back to disk again is a lengthy operation. Try doing it all asynchronously using a dispatch queue or an operation block.
Cheers, Have Fun.
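Since the 300 x 300 preview uses aspect fill, the exported render size only matches what the user saw if you also crop to the region the preview layer actually displayed. The geometry can be sketched as follows (the helper name is mine, not an AVFoundation API):

```python
def aspect_fill_visible_rect(src_w, src_h, dst_w, dst_h):
    # Source-space rect that an aspect-fill layer of size dst_w x dst_h shows:
    # scale so the source fully covers the layer, then center-crop the overflow.
    scale = max(dst_w / src_w, dst_h / src_h)
    vis_w = dst_w / scale
    vis_h = dst_h / scale
    return ((src_w - vis_w) / 2, (src_h - vis_h) / 2, vis_w, vis_h)

# A 640x480 capture shown in a 300x300 aspect-fill layer displays the
# centered 480x480 square:
# aspect_fill_visible_rect(640, 480, 300, 300) -> (80.0, 0.0, 480.0, 480.0)
```

That (80, 0, 480, 480) rect is what the crop transform in the linked answer should select before rendering at 300 x 300.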
