Multiple file upload with maximum size in Yii2 - POST

I am trying to upload multiple files at once.
My upload limit is 20MB, and it works fine when the total POST size is less than 20MB:
[['fileUpload'], 'file', 'maxFiles' => 6]
But when I upload files totalling more than 20MB, I just get an error message.
I also handle the exception in my controller, but execution never enters the controller's action.
Help me solve this problem.

You have to change the following settings in your php.ini file:
; Maximum allowed size for uploaded files (set this as per your requirement)
upload_max_filesize = 80M
; Must be greater than or equal to upload_max_filesize (set this as per your requirement)
post_max_size = 80M
; Time is in seconds
max_execution_time = 60
After modifying php.ini you need to restart your HTTP server for the new configuration to apply.
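As for why execution never reaches your controller action: when the POST body exceeds post_max_size, PHP discards $_POST and $_FILES entirely, so there is nothing left for Yii2 to validate or for your exception handler to catch. You can detect this case yourself before validating. A minimal sketch in plain PHP (the function names are my own, not a Yii2 API):

// Returns true when a POST body was dropped because it exceeded post_max_size.
// In that case $_POST and $_FILES are empty but CONTENT_LENGTH is still set.
function postLimitExceeded()
{
    $limit = iniToBytes(ini_get('post_max_size'));
    $sent  = isset($_SERVER['CONTENT_LENGTH']) ? (int) $_SERVER['CONTENT_LENGTH'] : 0;
    return $_SERVER['REQUEST_METHOD'] === 'POST'
        && empty($_POST) && empty($_FILES)
        && $limit > 0
        && $sent > $limit;
}

// Converts shorthand ini values like "20M" or "1G" into bytes.
function iniToBytes($value)
{
    $value = trim($value);
    $unit  = strtoupper(substr($value, -1));
    $bytes = (int) $value;
    switch ($unit) {
        case 'G': $bytes *= 1024; // fall through
        case 'M': $bytes *= 1024; // fall through
        case 'K': $bytes *= 1024;
    }
    return $bytes;
}

At the top of your controller action you can then show a friendly error instead of relying on the file validator:

if (postLimitExceeded()) {
    // e.g. set a flash message and redirect back to the form
}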


What are the default paths added to AIDE's database?

Please excuse my English ^^'
My question is the one in the title.
Here is the content of my /etc/aide/aide.conf:
# AIDE conf
# The daily cron job depends on these paths
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new
database_new=file:/var/lib/aide/aide.db.new
gzip_dbout=no
# Set to no to disable summarize_changes option.
summarize_changes=yes
# Set to no to disable grouping of files in report.
grouped=yes
# standard verbose level
verbose = 6
# Set to yes to print the checksums in the report in hex format
report_base16 = no
# if you want to sacrifice security for speed, remove some of these
# checksums. Whirlpool is broken on sparc and sparc64 (see #429180,
# #420547, #152203).
Checksums = sha256+sha512+rmd160+haval+gost+crc32+tiger
# The checksums of the databases to be printed in the report
# Set to 'E' to disable.
database_attrs = Checksums
# check permissions, owner, group and file type
OwnerMode = p+u+g+ftype
# Check size and block count
Size = s+b
# Files that stay static
InodeData = OwnerMode+n+i+Size+l+X
StaticFile = m+c+Checksums
# Files that stay static but are copied to a ram disk on startup
# (causing different inode)
RamdiskData = InodeData-i
# Check everything
Full = InodeData+StaticFile
# Files that change their mtimes or ctimes but not their contents
VarTime = InodeData+Checksums
# Files that are recreated regularly but do not change their contents
VarInode = VarTime-i
# Files that change their contents during system operation
VarFile = OwnerMode+n+l+X
# Directories that change their contents during system operation
VarDir = OwnerMode+n+i+X
# Directories that are recreated regularly and change their contents
VarDirInode = OwnerMode+n+X
# Directories that change their mtimes or ctimes but not their contents
VarDirTime = InodeData
# Logs grow in size. Log rotation of these logs will be reported, so
# this should only be used for logs that are not rotated daily.
Log = OwnerMode+n+S+X
# Logs that are frequently rotated
FreqRotLog = Log-S
# The first instance of a rotated log: After the log has stopped being
# written to, but before rotation
LowLog = Log-S
# Rotated logs change their file name but retain all their other properties
SerMemberLog = Full+I
# The first instance of a compressed, rotated log: After a LowLog was
# compressed.
LoSerMemberLog = SerMemberLog+ANF
# The last instance of a compressed, rotated log: After this name, a log
# will be removed
HiSerMemberLog = SerMemberLog+ARF
# Not-yet-compressed log created by logrotate's dateext option:
# These files appear one rotation (renamed from the live log) and are gone
# the next rotation (being compressed)
LowDELog = SerMemberLog+ANF+ARF
# Compressed log created by logrotate's dateext option: These files appear
# once and are not touched any more.
SerMemberDELog = Full+ANF
I don't understand why AIDE adds just over 400,000 entries to the new database when I execute the following command: update-aide.conf ; aideinit
There are no selection lines or restricted selection lines anywhere in the config file (a selection line would look something like /etc InodeData+StaticFile), so I'm wondering whether AIDE adds some by default.
I'm on Ubuntu 18.04.4, so the aide package comes with the aide-common wrapper package.
I would like to have a clean aide.conf file, but when I tried to delete SerMemberDELog = Full+ANF, for example, I got the following error:
846:Error in expression:
Configuration error
error checking aide config, not running aide
AIDE --init return code 255
Big thanks to anyone who can help me :)!
If you need more details, I'm always here.
Finally I managed to solve my problem.
The /etc/aide/aide.conf config file isn't the only file used by AIDE: when you run the update-aide.conf wrapper, it actually combines this file with many other conf files in the /etc/aide/aide.conf.d directory.
The easy fix is to move or delete those files; from then on you will be able to clean up your /etc/aide/aide.conf file :)
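Something along these lines (the backup location is just an example):

# see which extra rule fragments the wrapper merges in
ls /etc/aide/aide.conf.d

# move them aside rather than deleting them, in case you need them back
mkdir -p /root/aide.conf.d.bak
mv /etc/aide/aide.conf.d/* /root/aide.conf.d.bak/

# regenerate the merged configuration and re-initialise the database
update-aide.conf
aideinit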
Have a good day!

Upload file with unknown size to OneDrive

So I am uploading a file to OneDrive using a resumable file upload session. However, I cannot know the size of the file before uploading. I know that Google allows uploading files with content ranges like
Content-Range: bytes 0-128/*
I would assume OneDrive would also allow it, as this form is even specified in RFC 2616:
Content-Range = "Content-Range" ":" content-range-spec
content-range-spec = byte-content-range-spec
byte-content-range-spec = bytes-unit SP
byte-range-resp-spec "/"
( instance-length | "*" )
byte-range-resp-spec = (first-byte-pos "-" last-byte-pos)
| "*"
instance-length = 1*DIGIT
The header SHOULD indicate the total length of the full
entity-body, unless this length is unknown or difficult to
determine. The asterisk "*" character means that the
instance-length is unknown at the time when the response was
generated.
But after reading the OneDrive documentation, I found it says:

Example: In this example, the app is uploading the first 26 bytes of a 128 byte file.
The Content-Length header specifies the size of the current request.
The Content-Range header indicates the range of bytes in the overall file that this request represents.
The total length of the file is known before you can upload the first fragment of the file.

PUT https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337
Content-Length: 26
Content-Range: bytes 0-25/128

<bytes 0-25 of the file>

Important: Your app must ensure the total file size specified in the Content-Range header is the same for all requests. If a byte range declares a different file size, the request will fail.
Maybe I'm just reading the documentation wrong, or maybe this only applies to the example, but before I go and build an entire uploader I would like clarification on whether this is allowed for OneDrive.
My question is:
Does OneDrive allow uploading files of unknown size?
Thanks in advance for your help.
For anyone else wondering: no, you can't. File uploads must specify a size.
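So the size has to be known before the session starts, e.g. by buffering the stream to a temporary file first. Every fragment then repeats the same declared total; for example (URL and numbers are illustrative, following the documentation example above):

PUT https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337
Content-Length: 26
Content-Range: bytes 26-51/128

<bytes 26-51 of the file>

A range of the form bytes 26-51/* (unknown total) will not work.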

Upload size limit with Parallels Plesk >12.0: what conf?

I have a big problem with the file upload limit (I need a large size, around 2GB). My app was in /var/www/vhost/default and it worked perfectly; I decided to move it to /var/www/vhost/mydomain.com to manage it through the Plesk panel, and now there is an upload limit that I need to raise. I can't upload files larger than 128MB and I don't know why.
I have checked all php.ini files (with locate php.ini) and they are all correct.
I used the Plesk panel to set the PHP conf -> done.
I put:
php_value memory_limit 2000M
php_value upload_max_filesize 2000M
php_value post_max_size 2000M
in my .htaccess in htdocs.
I put a vhost.conf and a vhost_ssl.conf in /var/www/vhost/mydomain.com/conf with:
<Directory /var/www/vhosts/mydomain.com/htdocs/>
php_value upload_max_filesize 2000M
php_value post_max_size 2000M
php_value memory_limit 2000M
</Directory>
I disabled nginx.
I edited /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php to put:
FcgidMaxRequestLen 2147483648
I tried:
grep attach_size_limit /etc/psa-webmail/horde/imp/conf.php
$conf['compose']['link_attach_size_limit'] = 0;
$conf['compose']['attach_size_limit'] = 0;
I reloaded/restarted apache2, psa, ... and it still doesn't work. I am out of ideas; every conf file seems correct. It's not a permission problem, because I can upload some 80MB files but not 500MB ones.
Does someone have an idea? I need to fix this fast.
Thanks!!
So, to begin with: I'm now in charge of the problem explained above.
That said, it is solved.
I'll explain the steps here in case someone ever encounters something similar, so this topic might help.
There were, in fact, three problems here:
First, the nginx settings were overridden by Plesk's default templates. You need to create (if it doesn't exist) a "custom" folder in /usr/local/psa/admin/conf/templates, then copy the chosen files (here: /usr/local/psa/admin/conf/templates/default/domain/nginxDomainVirtualHost.php) into the custom folder, keeping the folder hierarchy. Modify the files as you want (client_max_body_size here, as sketched below), check that they are still valid PHP using "php -l nginxDomainVirtualHost.php", and generate the new configuration files with "/usr/local/psa/admin/bin/httpdmng --reconfigure-all" (you might use another --reconfigure option). That's it. Source: link
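For illustration, the relevant edit in the copied template boils down to raising this nginx directive (2048m is just an example matching the ~2GB requirement; where exactly it sits depends on the template):

client_max_body_size 2048m;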
Second: copy /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php into the custom folder mentioned above and edit the line where FcgidMaxRequestLen is set to your own value. Save, check that the PHP file is still valid, and generate the new configuration files. Source: link
Third: there is a menu named "PHP settings" under "Websites & Domains" in Plesk, where you can override the config files by setting custom values directly for memory_limit, post_max_size, and upload_max_filesize. Of course, the php.ini files had been changed accordingly prior to that (cf. the OP's post).
That was the last thing that kept us from uploading bigger files.
Can you please check your PHP upload limit by creating a phpinfo file under the account? If it shows the correct values and your application still doesn't work, then try to update your /etc/httpd/conf.d/fcgid.conf file.
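For example, a minimal phpinfo file dropped into the docroot (name it whatever you like and remove it afterwards):

<?php phpinfo();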

How do I read the size of a directory in SoftLayer cloud storage using the jclouds Swift API?

I am trying to set a quota based on a directory, say about 5MB, and hence I need to read the total size of the blobs currently in a directory.
I am able to get the size of an individual blob using the code below:
if (containerName != null && objectName != null) {
    BlobMetadata metaData = blobStore.blobMetadata(containerName, objectName);
    if (metaData != null) {
        userMetaData = metaData.getUserMetadata();
        // The content metadata carries the blob's size in bytes
        ContentMetadata contMetadata = metaData.getContentMetadata();
        System.out.println("Object size: " + contMetadata.getContentLength());
    }
}
I just can't find out whether I can get the size of all the blobs in a directory without looping through each blob's metadata.
The container will have that info:
http://jclouds.apache.org/reference/javadoc/1.9.x/org/jclouds/openstack/swift/v1/domain/Container.html#getBytesUsed()
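A minimal sketch of reading that value via the Swift-specific API (assuming jclouds 1.9; the endpoint, credentials, region, and container name are placeholders):

import org.jclouds.ContextBuilder;
import org.jclouds.openstack.swift.v1.SwiftApi;
import org.jclouds.openstack.swift.v1.domain.Container;
import org.jclouds.openstack.swift.v1.features.ContainerApi;

public class ContainerSize {
    public static void main(String[] args) {
        SwiftApi swiftApi = ContextBuilder.newBuilder("openstack-swift")
                .endpoint("https://identity.example.com/v2.0") // placeholder auth endpoint
                .credentials("tenant:user", "password")        // placeholder credentials
                .buildApi(SwiftApi.class);

        ContainerApi containerApi = swiftApi.getContainerApi("region-1"); // placeholder region
        Container container = containerApi.get("my-container");

        // Total bytes used by every object in the container
        System.out.println("Bytes used: " + container.getBytesUsed());
    }
}

Note that this figure covers the whole container; for a single "directory" (a key prefix in Swift), you would still have to sum the sizes of the objects under that prefix.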

Mobile Safari makes multiple video requests

I am designing a web application for iPad which makes use of HTML5 video in Mobile Safari. I am transmitting the file manually through an ASP.NET .ashx handler hosted on IIS 7 running .NET Framework v2.0.
The essential code looks partly like this:
// If we receive range header only transmit partial file
// If we receive a Range header, only transmit the requested part of the file
if (context.Request.Headers["Range"] != null)
{
    var fi = new FileInfo(filePath);
    long fileSize = fi.Length;

    // Parse "bytes=start-end"; the end index may be absent (e.g. "bytes=0-")
    string headerRange = context.Request.Headers["Range"].Replace("bytes=", "");
    string[] range = headerRange.Split('-');
    long startIndex = Convert.ToInt64(range[0]);
    long endIndex = (range.Length > 1 && range[1].Length > 0)
        ? Convert.ToInt64(range[1])
        : fileSize - 1;

    // Add Content-Range and Last-Modified headers
    context.Response.StatusCode = (int)HttpStatusCode.PartialContent;
    context.Response.AddHeader(HttpWorkerRequest.GetKnownResponseHeaderName(HttpWorkerRequest.HeaderContentRange),
        String.Format("bytes {0}-{1}/{2}", startIndex, endIndex, fileSize));
    context.Response.AddHeader(HttpWorkerRequest.GetKnownResponseHeaderName(HttpWorkerRequest.HeaderLastModified),
        String.Format("{0:r}", fi.LastWriteTime)); // last write time, not creation time

    long length = (endIndex - startIndex) + 1;
    context.Response.TransmitFile(filePath, startIndex, length);
}
else
{
    context.Response.TransmitFile(filePath);
}
Now what confuses me to no end is the request pattern Safari seems to use. Proxying the requests through Fiddler, I get the following for an approx. 2MB file.
NOTE: when requesting an mp4 file served directly by IIS 7, the protocol and number of requests are the same.
1. First it requests 2 bytes, which allows it to read the Content-Range header.
2. Now it requests the entire content (?)
3. It proceeds to do steps 1 and 2 again (??)
4. It now requests only parts of the file (???)
If the file is larger, the last step consists of many more requests. I have tested up to 99 requests, where each request covers an equally sized part of the file. That part makes sense and is what I would expect. What I cannot make sense of is why it makes two initial requests for the first 2 bytes, as well as two requests for the entire file, before it finally requests the file in parts.
As far as I can tell, this results in the file being downloaded 2 to 3 times, depending on the length of the file and whether the user watches it long enough.
Can anybody make sense of this behavior and maybe explain what I can do to prevent multiple downloads? Thanks.
Per my comment on your question, I've had a similar issue in the past. One thing you could try, if you have control of the server (I did not), is to disable either gzip or identity encoding of the file. I believe that in the first request for the entire content (#2 in your list) Safari asks for the content gzip-encoded (compressed). Perhaps you can configure your IIS not to serve the file compressed for a gzip-encoding request.
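A sketch of what that could look like on IIS 7, assuming you can edit the site's web.config (this turns response compression off for the whole site, which is broader than strictly needed):

<configuration>
  <system.webServer>
    <!-- Disable compressed responses so the video is always served identity-encoded -->
    <urlCompression doStaticCompression="false" doDynamicCompression="false" />
  </system.webServer>
</configuration>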
Here is my original (unanswered) question on the subject:
https://stackoverflow.com/questions/4855485/mpmovieplayercontroller-not-playing-full-length-mp3
