I am doing some light video editing in Swift 4, reading the video with AVAsset and then using AVAssetExportSession to export the result. Everything works fine except one thing: the resulting video keeps the metadata of the original video.
This metadata includes (for example) the time and location where the video was taken.
I saw that AVAssetExportSession has a metadata: [AVMetadataItem]? property, but I don't know how to use it. I set it to nil and it didn't work; the export still kept the old metadata.
I read Apple's documentation, and it says that you don't create instances of AVMetadataItem, nor can you modify one, so what can I do? How can I erase that metadata, or write newly generated metadata instead?
There is a lot of info about reading metadata, but not much on writing it.
Thanks in advance.
Additional links
https://developer.apple.com/documentation/avfoundation/avassetexportsession
You can filter metadata with AVMetadataItemFilter.forSharing().
From the documentation: "Removes user-identifying metadata items, such as location information, and leaves only metadata related to commerce or playback itself." (see https://developer.apple.com/documentation/avfoundation/avmetadataitemfilter/1387905-forsharing)
Just add it to your export session:
let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetPassthrough) // choose an appropriate preset
exportSession?.metadataItemFilter = AVMetadataItemFilter.forSharing()
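Putting the pieces together, a minimal sketch (untested; `inputURL` and `outputURL` are hypothetical placeholders, and the Swift 4 era API names are assumed) might look like this. It also shows the second option the question asks about: although AVMetadataItem itself is immutable, its subclass AVMutableMetadataItem can be created directly and assigned to the session's metadata property.

```swift
import AVFoundation

// Sketch only: `inputURL` and `outputURL` are placeholders you supply yourself.
func exportWithoutPrivateMetadata(inputURL: URL, outputURL: URL) {
    let asset = AVAsset(url: inputURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov

    // Option 1: keep the original metadata but strip user-identifying items
    // (location, creation date, ...) at export time.
    export.metadataItemFilter = AVMetadataItemFilter.forSharing()

    // Option 2: replace the metadata entirely. AVMetadataItem is immutable,
    // but AVMutableMetadataItem can be created and configured.
    let title = AVMutableMetadataItem()
    title.identifier = .commonIdentifierTitle
    title.value = "My edited video" as NSString
    export.metadata = [title]

    export.exportAsynchronously {
        // inspect export.status / export.error here
    }
}
```

Note that option 1 and option 2 are alternatives: the filter removes items from whatever metadata the session carries, while assigning metadata replaces it wholesale.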
Related
I wonder how PhotoSwipe builds the URL to a particular image in a gallery.
I see that for each image in a gallery a URL of the form [BASE_URL]#&gid=2&pid=3 is built. If I understand correctly, the pid/gid values are assigned in the order in which galleries/photos appear on the page, which is a problem for dynamic content where galleries/photos are shuffled (sorted, deleted, etc.).
Is there a way to override that logic so that static IDs (e.g. database PKs) are used?
Thank you.
After digging into the sources I found out that if a gallery element has the attribute data-pswp-uid set to some value, then that value is used in the URL as the gid.
Unfortunately, the same trick on an image/figure didn't work.
I'm using AVCaptureSession to record a video with audio. Everything seems to work properly for short videos, but for some reason, if I record a video that is longer than about 12 seconds, the audio doesn't work.
Edit (because this answer is still getting upvotes): this answer mitigates the problem, but the likely root cause of the issue is addressed in jfeldman's answer.
I found the solution as an answer to a completely different question.
The issue is the movieFragmentInterval property in AVCaptureMovieFileOutput.
The documentation for this property explains what these fragments are:
A QuickTime movie is comprised of media samples and a sample table
identifying their location in the file. A movie file without a sample
table is unreadable.
In a processed file, the sample table typically appears at the
beginning of the file. It may also appear at the end of the file, in
which case the header contains a pointer to the sample table at the
end. When a new movie file is being recorded, it is not possible to
write the sample table since the size of the file is not yet known.
Instead, the table must be written when recording is complete. If
no other action is taken, this means that if the recording does not
complete successfully (for example, in the event of a crash), the file
data is unusable (because there is no sample table). By periodically
inserting “movie fragments” into the movie file, the sample table can
be built up incrementally. This means that if the file is not written
completely, the movie file is still usable (up to the point where the
last fragment was written).
It also says:
The default is 10 seconds. Set to kCMTimeInvalid to disable movie
fragment writing (not typically recommended).
So for some reason my recording was getting messed up whenever a fragment was written. I just added the line movieFileOutput.movieFragmentInterval = kCMTimeInvalid (where movieFileOutput is the AVCaptureMovieFileOutput I've added to the AVCaptureSession) to disable fragment writing, and the audio now works.
We also experienced this issue. Disabling movie fragment writing will work, but it doesn't actually explain the issue. Most likely you are recording to an output file whose extension does not support this feature, such as mp4. If you pass an output file with the mov extension, you should have no issues using movie fragment writing, and the output file will have audio.
Updating videoFileOutput.movieFragmentInterval = kCMTimeInvalid solved this for me.
However, I accidentally set movieFragmentInterval after calling startRecordingToOutputFileURL. An agonizing hour later I realized my mistake. For newbies like me, note the required order:
videoFileOutput.movieFragmentInterval = kCMTimeInvalid
videoFileOutput.startRecordingToOutputFileURL(filePath, recordingDelegate: recordingDelegate)
kCMTimeInvalid is now deprecated. This is how to assign it now:
videoFileOutput?.movieFragmentInterval = CMTime.invalid
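Putting the answers above together, a minimal sketch with the current API might look like this (untested; it assumes `session` is an AVCaptureSession already configured with video and audio inputs, `recordingDelegate` conforms to AVCaptureFileOutputRecordingDelegate, and `outputURL` is a placeholder ending in .mov, since the QuickTime container is what supports movie fragments):

```swift
import AVFoundation

// Sketch only: `session`, `recordingDelegate`, and `outputURL` are assumed
// to be set up elsewhere.
let movieFileOutput = AVCaptureMovieFileOutput()

// Must be set *before* recording starts; changing it afterwards has no
// effect on the file already being written.
movieFileOutput.movieFragmentInterval = CMTime.invalid

if session.canAddOutput(movieFileOutput) {
    session.addOutput(movieFileOutput)
}
movieFileOutput.startRecording(to: outputURL, recordingDelegate: recordingDelegate)
```

Alternatively, per the answer above, keeping fragment writing enabled but recording to a .mov file should also produce audio.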
I have a use case where, after sending the video stream to the Red5 server, I would like to post-process the video once it is saved. I would like to add some metadata tags to it.
I found that this can be done in the appDisconnect() method of the ApplicationAdapter, but there are other ways in which the video can be saved, for example by using ClientBroadcastStream.
Example
ClientBroadcastStream stream = (ClientBroadcastStream) app.getBroadcastStream(
conn.getScope(), "hostStream");
// Stop recording
stream.stopRecording();
I would like to know if there are any events I can listen for (something that tells me a video has been saved, at which location, and with which filename) to do post-processing of the video, so that I don't need to put hooks in multiple places.
Thanks
The easy way is to implement your own ClientBroadcastStream by extending this base class and simply overriding the stopRecording() method. If you would like to take a moment and add an enhancement request on the issue tracker, I would be glad to look into adding scope events for this type of thing; with a scope event you could just listen for them anywhere and handle them appropriately. Red5 issue tracker: http://code.google.com/p/red5/issues/list
Using a custom stream class would be configured in the red5-common.xml like so:
<bean id="clientBroadcastStream" scope="prototype" lazy-init="true" class="com.mypackage.MyClientBroadcastStream">
</bean>
All I want to do is upload an image into Active Directory. So far I can update any AD information except the image. I have tried to search for ideas but have come up with nothing so far.
Do I have to encode the image in a certain way? Do I just LDAP-replace the jpegPhoto attribute with a byte string of the photo?
Any hint towards a solution would be great.
Thanks in advance!
First of all, there is an attribute in Active Directory called thumbnailPhoto. According to this Microsoft article, the thumbnailPhoto attribute contains octet-string data, and AD interprets octet-string data as an array of bytes.
If you want a sample code in C# you can get something here.
From a theoretical point of view, you can also inject a photo with LDIF, using tools like "b64" to encode your image file in Base64.
Secondly, in my view a directory is not a database.
So, even if the attribute exists (created by Netscape under the OID 2.16.840.1.113730.3.1.35), and even if Microsoft explains how to put a picture into Active Directory, I think it is better to store a URL, or a path to a file on a file system, in the directory.
I have no idea of the impact on AD performance if each entry is loaded with 40 KB (the average size of a thumbnail photo). But I know that if there are badly written programs on the network (the kind of program that loads all attributes when it searches for an entry in the directory), this will considerably load the network.
I hope it helps.
JP
I had this issue and was able to get it working by reading the file in binary mode and passing the bytes to #ldap.replace_attribute, i.e.
#ldap.replace_attribute USERS_DN, :thumbnailPhoto, File.binread("path_to_file")
Where #ldap is an instance of net/ldap, bound to AD. i.e.
#ldap = Net::LDAP.new
#ldap.host = ''
#ldap.port = ''
#ldap.auth USERNAME, PASSWORD
#ldap.bind
I've set a custom client for an FLVPlayback's NetStream to attach my own functions (onXMPData, onMetaData) and parse the various info myself. However, I'd still like to pass the metadata back to the VideoPlayer. How do I do this? I tried dispatching a METADATA_RECEIVED event with the metadata object (from the client, the NetStream, the VideoPlayer, the FLVPlayback...), but it does not work.
I gave up and decided just to open two NetStreams for the F4V: the regular, untampered FLVPlayback one, and another just for reading XMP data.