I am trying to upload a media file to its container.
The default value of the maximum upload file size is 10 MB, and it is defined in ext-backoffice\backoffice\project.properties like this:
# Constraint for maximum upload file size (in KB)
backoffice.fileUpload.maxSize=10000
How can I override this value?
After some research I found this link.
But sadly it does not work for me. Do you have any idea?
You can change the value of this property (or any other property) in the hAC (if you are an admin), but the change will only last until the server is restarted.
The best approach is to add the property backoffice.fileUpload.maxSize=<the value you want> to local.properties, so the server reads it every time it starts.
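For example, a hedged snippet for local.properties (the value is interpreted in KB, matching the default above; 20480 is only an illustrative value):
# Override the Backoffice upload limit (in KB); 20480 here is just an example value
backoffice.fileUpload.maxSize=20480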
There are quite a lot of posts out there that show how to get the ESTIMATED file size of an MPMediaItem. Such methods, using AVAssetExportSession, give a file size that is far off from the real size; for example, a 3 MB file shows up as 10 MB. As there is no way to directly access an ipod-library:// schemed URL (at least not that I know of), I think I'm stuck with getting the file size indirectly.
I guess the size is off because I set the preset to M4A when exporting, which may cause some transcoding, but if I set the preset to Passthrough, even with the timeRange set, the exporter's estimated size is always zero.
How should I get the EXACT file size of an MPMediaItem in bytes?
As there is no way to directly access an ipod-library:// schemed URL (at least not that I know of)
So long as the asset is actually a file in the library (i.e. it's not something in the cloud), its URL is its assetURL. That is as close as you can get to a pointer to the file in storage; remember, you are sandboxed (the system is not going to let you mess about directly inside the directory where the user's songs are stored).
I'm looking for a solution that generates the processed files for one database in a folder explicitly defined via a uid, e.g.:
fileadmin/_processed/<uid>/allProcessedFilesHere
The generation of the files currently happens via the following code, and I am not able to figure out how to adjust the config array to pass a different storage.
$settings['additionalParameters'] = '-quality 80';
$settings['width'] = $imageSettings["width"];
$settings['height'] = $imageSettings["height"];
$processedImage = $file->process(\TYPO3\CMS\Core\Resource\ProcessedFile::CONTEXT_IMAGECROPSCALEMASK, $settings);
So I am looking for something similar to the following, where $uid is just the id of the entry whose images shall be processed:
$storageRepository = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Core\\Resource\\StorageRepository');
$uidForStorageForDBEntry = getStorageUidForDBObject($uid);
$identifiedStorage = $storageRepository->findByUid($uidForStorageForDBEntry);
$settings['storage'] = $identifiedStorage->getUid();
Creating one storage per uid does not seem to be the right way to do it, but I can't figure out another approach at the moment. As there are hundreds of objects with images in many different formats, I don't want to use a single _processed folder with 100k image entries inside.
The functionality to bind the processed folder to a storage element is being integrated into the TYPO3 core. It should work in version 7 LTS.
Is it possible for the Flume HDFS sink to roll whenever a single file (from a Flume source, say Spooling Directory) ends, instead of rolling after a certain number of bytes (hdfs.rollSize), a time interval (hdfs.rollInterval), or a number of events (hdfs.rollCount)?
Can Flume be configured so that a single file is a single event?
Thanks for your input.
Regarding your first question, it is not possible because the sink's logic is disconnected from the source's logic. I mean, a sink only sees events being put into the channel that it must process; the sink does not know whether an event is the first or the last one of a file.
Of course, you could try to create your own source (or extend an existing one) in order to add a header to the event with a value meaning "this is the last event". Then, another custom sink could behave depending on such a header: for instance, if the header is not set, the events are not persisted but kept in memory until the header is seen; then all the information is persisted in the final backend as a batch. Another possibility is that the custom sink persists the data in a file until the header is seen; then the file is closed and another one is opened.
Regarding your second question, it depends on the source. The spooldir source behaves based on the deserializer parameter; by default its value is LINE, which means:
Specify the deserializer used to parse the file into events. Defaults to parsing each line as an event. The class specified must implement EventDeserializer.Builder.
But other custom Java classes can be configured, as said above; for instance, a deserializer for the whole file.
You can set rollSize to a small number combined with BlobDeserializer to load files one by one instead of combining them into blocks. This is really helpful when you have unsplittable binary files such as PDF or gz files.
This is the relevant part of the configuration:
#Set deserializer to BlobDeserializer and set the maximum blob size to be 1GB.
#Notice that the blobs have to fit in memory so this doesn't work for files that cannot fit in memory.
agent.sources.spool.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
agent.sources.spool.deserializer.maxBlobLength = 1000000000
#Set rollSize to 1024 to avoid combining multiple small files into one part.
agent.sinks.hdfsSink.hdfs.rollSize = 1024
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.rollInterval = 0
The answer to the question "Can Flume be configured so that a single file is a single event?" is yes.
You only have to configure the following property to be 1:
hdfs.rollCount = 1
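For example, assuming the sink is named hdfsSink as in the earlier snippet, and with the size- and time-based triggers set to 0 so they never fire first:
agent.sinks.hdfsSink.hdfs.rollCount = 1
agent.sinks.hdfsSink.hdfs.rollSize = 0
agent.sinks.hdfsSink.hdfs.rollInterval = 0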
I'm looking for a solution to your first question, because sometimes the file is too big and needs to be split into several chunks.
You can use any event header in hdfs.path (https://flume.apache.org/FlumeUserGuide.html#hdfs-sink).
If you are using the Spooling Directory Source, you can enable putting the file name in the events using fileHeaderKey or basenameHeaderKey (https://flume.apache.org/FlumeUserGuide.html#spooling-directory-source).
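For example (the agent, source, and sink names are assumed from the earlier snippet, and hdfs://namenode/flume/events is a placeholder path), the base name of each spooled file can be carried in a header and then referenced in hdfs.path:
#Put the base name of the original file into a "basename" header on every event.
agent.sources.spool.basenameHeader = true
agent.sources.spool.basenameHeaderKey = basename
#Reference that header when building the HDFS output path.
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events/%{basename}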
Can Flume be configured so that a single file is a single event?
It could be; however, it is not recommended. The underlying implementation (protobuf) limits file (i.e. event) sizes to 64 MB. Flume events are meant to be small because of its architecture and design (fault tolerance, etc.).
Since I started using Fireworks CS5.1, no matter how I write the file name while exporting a slice, it always adds _s1 at the end. Does anyone know how to turn that off?
Have you checked the options while exporting? You can adjust the pattern for the file name. _s1 indicates state 1, so it's there by default, as there is at least one state in the file.
In Delphi 7, I open a file with CreateFileMapping and then get a pointer by using MapViewOfFile.
How can I expand the mapped memory, add some characters to it, and have them saved to that file?
I have already opened the file with the appropriate modes (fmOpenReadWrite, PAGE_READWRITE), and if I overwrite characters they get saved to the file, but I need to add extra values in the middle of the file.
If the file mapping is backed by an actual file and not a block of memory, then you can resize the file in one of two ways:
call CreateFileMapping() with a size that exceeds the current file size. The file will be resized to match the new mapping.
use SetFilePointer() and SetEndOfFile() to resize the file directly, then call CreateFileMapping() with the new size.
Both behaviors are described in the documentation for CreateFileMapping(); a sketch of the second approach follows below.
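A minimal sketch of the second approach, written as plain Win32 C for illustration (the same calls exist in Delphi's Windows unit); the file name "data.bin" and the sizes are placeholders:
#include <windows.h>

int main(void)
{
    const DWORD newSize = 1024 * 1024;  /* desired new file size in bytes, placeholder */

    HANDLE hFile = CreateFileA("data.bin", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    /* Grow the file directly before mapping it. */
    SetFilePointer(hFile, newSize, NULL, FILE_BEGIN);
    SetEndOfFile(hFile);

    /* Map the file at its new size. Passing newSize here without the two calls
       above would also have grown the file, as the documentation describes. */
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, newSize, NULL);
    if (hMap == NULL) { CloseHandle(hFile); return 1; }

    char *view = (char *)MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, newSize);
    if (view == NULL) { CloseHandle(hMap); CloseHandle(hFile); return 1; }

    view[newSize - 1] = 'X';            /* write through the mapping */

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}
Note that a mapping cannot grow in place: to insert data in the middle of the file you still have to resize, remap, and move the tail of the data yourself.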
You cannot resize a file mapping created with CreateFileMapping once it has already been created. See the earlier discussion on the topic: Windows: Resize shared memory.