PhpSpreadsheet - added row exceeds row limit in file - phpspreadsheet

I have a template file for creating files filled with data. The template contains a header, a footer and one data row, which I clone (and copy its style) as many times as I need. The problem is that even though the template file has only ~200 rows, if I add a single row, then when I try to open the generated file it complains that "Maximum number of rows was exceeded".
$reader = \PhpOffice\PhpSpreadsheet\IOFactory::createReader('Xlsx');
$info = $reader->listWorksheetInfo($filename);
// $info[0]['totalRows'] = 1048576 - so the maximum for Xlsx
This way everything works and there is no error on the PHP side, but LibreOffice complains, and the operation is extremely slow and memory-hungry (especially inserting rows with insertRowsBefore, which seems to iterate over all the cells to recalculate).
The Worksheet::calculateWorksheetDataDimension() method returns A1:HJ221, and if I apply a ReadFilter to read only the rows and columns within the data dimension... the column widths and row heights are gone. I'm not able to set them manually, as the header and footer are very complex. If I load the whole file, the widths, heights and all styles are perfect, but the operation is very slow and consumes a lot of memory.
Any ideas why this happens? And how to avoid that?
PHP 7.4
PhpSpreadsheet 1.22

Related

Confusion about Mach-O offsets and addresses

I'm looking into the Mach-O structure and there is one bit I am confused about.
I understand the basic structure of a Mach-O file. I'm trying to programmatically read the bytes in the first TEXT section of the first TEXT segment, and I have a pointer to the start of the Mach-O header. I am trying to compute the appropriate offset to add to that pointer so that it points to the bytes of the TEXT section.
In order to obtain the data of the sections in a segment, I would have to "take the offset of the segment command in the file, add the size of the segment structure, and then loop through nsects times, incrementing the offset by the size of the section struct each time", as described in this article: https://h3adsh0tzz.com/2020/01/macho-file-format/
However, in the "Data" section at the bottom of that page, the same article mentions that the memory addresses are relative to the start of the data and not to the start of the Mach-O. In that case, why did we need to calculate all the offsets above if everything is relative to the start of the data and not to the Mach-O header?
Edit: Just a note, I'm interested in reading the bytes both in memory and on disk.
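For what it's worth, here is a minimal C sketch of the walk the article describes, assuming a 64-bit, non-fat image, no error handling, and that base points at the mach_header_64; the function name locate_text_bytes is just illustrative. It reports both interpretations: the on-disk file offset stored in the section header, and the in-memory address computed relative to the header.

#include <mach-o/loader.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Walk the load commands and report where the bytes of __TEXT,__text live. */
void locate_text_bytes(const uint8_t *base)
{
    const struct mach_header_64 *hdr = (const struct mach_header_64 *)base;
    const uint8_t *p = base + sizeof(*hdr);              /* first load command */

    for (uint32_t i = 0; i < hdr->ncmds; i++) {
        const struct load_command *lc = (const struct load_command *)p;

        if (lc->cmd == LC_SEGMENT_64) {
            const struct segment_command_64 *seg = (const struct segment_command_64 *)p;

            if (strcmp(seg->segname, "__TEXT") == 0) {
                /* The section_64 headers follow their segment command immediately. */
                const struct section_64 *sect = (const struct section_64 *)(p + sizeof(*seg));

                for (uint32_t j = 0; j < seg->nsects; j++, sect++) {
                    if (strcmp(sect->sectname, "__text") == 0) {
                        /* On disk (a raw file buffer): sect->offset is a byte
                           offset from the start of the file, so the bytes sit
                           at file_buffer + sect->offset. */
                        printf("file offset of __text: 0x%x\n", sect->offset);

                        /* In memory (an image mapped by dyld): the header sits
                           at the start of __TEXT, so the bytes sit at
                           base + (sect->addr - seg->vmaddr); any slide is
                           already folded into base. */
                        const uint8_t *in_mem = base + (sect->addr - seg->vmaddr);
                        printf("in-memory address:     %p\n", (void *)in_mem);
                        return;
                    }
                }
            }
        }
        p += lc->cmdsize;                                 /* next load command */
    }
}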

Deleting full lines from the beginning of a logfile in Delphi

I've developed a Delphi service that writes a log to a file. Each entry is written to a new line. Once this logfile reaches a specific size limit, I'd like to trim the first X lines from the beginning of the file to keep its size below that limit. I've found some code here on SO that demonstrates how to delete fixed-size chunks of data from the beginning of the file, but how do I go about deleting whole, variably sized lines rather than fixed chunks?
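Not Delphi, but as a general illustration of the usual approach, here is a short C sketch (the function name trim_leading_lines and the read-the-whole-file strategy are purely illustrative): scan past the first N newline characters, then rewrite the file with only the bytes that follow. The same idea maps naturally onto TStringList or TFileStream in Delphi.

#include <stdio.h>
#include <stdlib.h>

/* Drop the first `lines_to_drop` '\n'-terminated lines from the file at `path`. */
int trim_leading_lines(const char *path, int lines_to_drop)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size > 0 ? size : 1);
    if (!buf) { fclose(f); return -1; }
    fread(buf, 1, size, f);
    fclose(f);

    /* Find the offset just past the Nth newline; everything before it is dropped. */
    long start = 0;
    while (start < size && lines_to_drop > 0)
        if (buf[start++] == '\n') lines_to_drop--;

    /* Rewrite the file with only the surviving tail. */
    f = fopen(path, "wb");
    if (!f) { free(buf); return -1; }
    fwrite(buf + start, 1, size - start, f);
    fclose(f);
    free(buf);
    return 0;
}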

How sqlite3 write capacity is calculated

I create a test table:
create table if not exists `HKDevice` (primaryID integer PRIMARY KEY AUTOINCREMENT,mac integer)
and insert 1 row:
NSString *sql = @"insert into `HKDevice` (mac) values ('0')";
int result = sqlite3_exec(_db, sql.UTF8String, NULL, NULL, &errorMesg);
The disk report shows about 48 KB written.
This is much bigger than I expected. I know an integer takes 4 bytes in SQLite, so I thought the total write should be less than 10 bytes.
The second write is also close in size to the first, which confuses me even more.
Can someone tell me why? Thanks!
Writing every row individually to the file would be inefficient for larger operations, so the database always reads and writes entire pages.
Your INSERT command needs to modify at least three pages: the table data, the system table that contains the AUTOINCREMENT counter, and the database change counter in the database header.
To make the changes atomic even in the case of interruptions, the database needs to save the old data of all changed pages in the rollback journal. So that's six pages overall.
If you do not use an explicit transaction around both commands, every command is automatically wrapped into an automatic transaction, so you get these writes for both commands. That's twelve pages overall.
With the default page size of 4 KB, that is 12 × 4 KB = 48 KB, which matches the amount of writes you've seen.
(Apparently, the writes for the file system metadata are not shown in the report.)
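To illustrate that last point, here is a minimal sketch reusing the question's _db handle and sqlite3_exec (error handling omitted, and the second INSERT value is just made up): wrapping both statements in one explicit transaction means the rollback journal and the dirty pages are written once per transaction instead of once per statement.

char *errorMesg = NULL;

/* One explicit transaction around both statements: the rollback journal and
   the changed pages are flushed once at COMMIT rather than once per INSERT. */
sqlite3_exec(_db, "BEGIN", NULL, NULL, &errorMesg);
sqlite3_exec(_db, "insert into `HKDevice` (mac) values ('0')", NULL, NULL, &errorMesg);
sqlite3_exec(_db, "insert into `HKDevice` (mac) values ('1')", NULL, NULL, &errorMesg);
sqlite3_exec(_db, "COMMIT", NULL, NULL, &errorMesg);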
There will be some metadata overhead involved. Firstly, the database file must be created, and it has to maintain the schema for the tables, indexes, sequences, custom functions, etc. These all contribute to the disk space usage.
As an example, on Linux, adding the database table you define above to a new database results in a file of 12,288 bytes (12 KB). If you add more tables, the space requirements increase, as they do when you add data to the tables.
Furthermore, for efficiency reasons, databases typically write data to disk in "pages", i.e. a page (or block) of space is allocated and then written to. The page size is chosen to optimise I/O, for example 4096 bytes. So writing a single integer might require 4 KB on disk if a new page is needed. However, you will notice that writing a second integer does not consume any more space, because there is sufficient room in the existing page. Once the page becomes full, a new page is allocated and the on-disk size can grow again.
The above is a bit of a simplification; database page allocation and management is a complicated subject in its own right. I'm sure a search would turn up many resources with detailed information for SQLite.
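If you want to watch this page-level behaviour directly, here is a small sketch using the C API (assuming an open sqlite3 *db handle; print_row is just an ad-hoc callback): PRAGMA page_size reports the allocation unit and PRAGMA page_count the number of pages currently in the file, so their product is the database file size on disk.

#include <sqlite3.h>
#include <stdio.h>

/* sqlite3_exec callback: print the first column of each result row. */
static int print_row(void *label, int argc, char **argv, char **colnames)
{
    printf("%s = %s\n", (const char *)label,
           (argc > 0 && argv[0]) ? argv[0] : "NULL");
    return 0;
}

void report_page_usage(sqlite3 *db)
{
    char *err = NULL;
    sqlite3_exec(db, "PRAGMA page_size;",  print_row, "page_size",  &err);
    sqlite3_exec(db, "PRAGMA page_count;", print_row, "page_count", &err);
}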

Axlsx - Warning while opening a file

I was working on generating some Excel reports, and those sheets contain a lot of data. I cannot use the line below, as I am not filling the sheet in a linear way:
worksheet.add_row [array_of_data]
So instead I first initialize a row like this:
worksheet.add_row Array.new(maximum_columns, nil)
Then I update the values accordingly. Here maximum_columns is calculated dynamically. I found that when this value becomes large and I try to open the file in Excel, I get a warning that the file might contain a virus. What should I do? I am unable to find any proper documentation. maximum_columns is less than Excel 2010's maximum of 16384 columns.
Thanks in advance.

What is the difference between a page file and an index file in Essbase?

What is the difference between a .pag file and an .ind file?
I know the page file contains the actual data, i.e. the data blocks and cells, and the index file holds pointers to the data blocks stored in the page file.
But is there any other difference, for example regarding size?
In my opinion the page file is always larger than the index file. Is that right?
And what does it mean if the index file turns out to be larger than the page file?
If I delete the page file, does that affect the index file?
Or, if I delete some data blocks from the page file, how does that affect the index file?
You are correct about the page file including the actual data of the cube (although there is no data without the index, so in effect they are both the data).
Very typically the page files are bigger than the index. It's simply based on the number of dimensions and whether they are sparse or dense, the number of stored members in the dimensions, the density of the data blocks, the compression scheme used in the data blocks, and the number of index entries in the database.
It's not a requirement that one be larger than the other; it simply depends on how you use the cube. I would advise you not to worry about it unless you run into specific performance problems. At that point it becomes useful to consider, for the purposes of optimizing retrieval, calc, or data-load time, whether you should change the configuration of the cube.
If you delete the page file it doesn't affect the index file necessarily, but you would lose all of the data in the cube. You would also lose the data if you just deleted all the index files. While the page files have data in them, as I mentioned, it is truly the combination of the page and index files that make up the data in the cube.
Under the right circumstances you can delete data from the database (such as doing a CLEARDATA operation) and you can reduce the size of the page files and/or the index. For example, deleting data such that you are clearing out some combination of sparse members may reduce the size of the index a bit as well as any data blocks associated with those index entries (that is, those particular combinations of sparse dimensions). It may be necessary to restructure and compact the cube in order for the size of the files to decrease. In fact, in some cases you can remove data and the size of the store files could grow.
