PhpSpreadsheet: load Excel file from memory rather than a file?

I'm downloading an Excel file from an Azure Storage Blob and therefore want to use stream_get_contents to get the file. But PhpSpreadsheet seems to only want to read the file off the filesystem.
For now, I'm saving it to a temp folder and reading it back, but that is less than ideal.
Is there a way to get PhpSpreadsheet to load via something other than a local file?

This is not supported. PhpSpreadsheet will always read from disk.
On a side note, since 1.13.0, PhpSpreadsheet is able to write in memory. See https://github.com/PHPOffice/PhpSpreadsheet/pull/1292
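In the meantime, the temp-file route you're already using can at least be kept tidy. A minimal sketch ($blobStream is a stand-in for whatever stream the Azure SDK returns), with the stream-based save from PR 1292 shown alongside:

<?php
require 'vendor/autoload.php';

use PhpOffice\PhpSpreadsheet\IOFactory;

// Reading: PhpSpreadsheet's readers want a file path, so spool the
// blob stream ($blobStream, from the Azure SDK) to a temp file first,
// load it, then clean up.
$tempFile = tempnam(sys_get_temp_dir(), 'xlsx');
file_put_contents($tempFile, stream_get_contents($blobStream));
$spreadsheet = IOFactory::load($tempFile);
unlink($tempFile);

// Writing: since 1.13.0, save() also accepts an open stream resource,
// so the round trip back to Azure can stay in memory.
$writer = IOFactory::createWriter($spreadsheet, 'Xlsx');
$memory = fopen('php://memory', 'w+');
$writer->save($memory);
rewind($memory);
$bytes = stream_get_contents($memory); // hand these bytes to the Azure SDK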

Related

How to handle a data encoding issue while copying data from a CSV file to Parquet using an Azure copy activity?

I have a CSV file that I want to convert to Parquet. The CSV file contains the value Querý in one column.
I am using a copy activity in Azure Data Factory to do the conversion, but the value comes out as Queryý. I can't find any encoding option on the sink. I have seen some documentation, but it only talks about the CSV file's encoding. Could someone help with this?
There is no way to set the encoding of the Parquet sink in Azure Data Factory.
I created a pipeline to test this and it works fine.
Here is some advice for troubleshooting:
Make sure the encoding of your CSV file is correct and is declared on the source dataset (see the sketch below).
Make sure the schema of your Parquet sink is correct.
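For reference, the encoding is declared on the DelimitedText source dataset, not the sink. A rough sketch of such a dataset; the names, container, and file here are hypothetical placeholders, and encodingName is the property that matters:

{
  "name": "SourceCsv",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": { "referenceName": "MyBlobStorage", "type": "LinkedServiceReference" },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "input",
        "fileName": "data.csv"
      },
      "columnDelimiter": ",",
      "encodingName": "UTF-8",
      "firstRowAsHeader": true
    }
  }
}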

Active Storage: how to modify an uploaded file and re-save it

I want to modify a file once it has been uploaded (e.g. run some ImageMagick operations on it).
My plan was to download the file into a tmp directory, modify it, and then re-upload it.
Is there a nice way to do this?
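For what it's worth, a sketch of exactly that flow, assuming a model with has_one_attached :image and the mini_magick gem (the record and attachment names and the resize step are illustrative):

require "mini_magick"
require "tempfile"

Tempfile.create(["upload", ".png"]) do |file|
  file.binmode
  file.write(record.image.download)   # pull the stored file down
  file.flush

  image = MiniMagick::Image.open(file.path)
  image.resize "800x800>"             # example ImageMagick operation
  image.write(file.path)

  record.image.attach(                # re-upload the modified file
    io: File.open(file.path, "rb"),
    filename: record.image.filename.to_s,
    content_type: record.image.content_type
  )
end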

What is the recommended approach to parse a CSV file stored in S3?

I am using the aws-sdk gem to read a CSV file stored in AWS S3.
Referencing the AWS doc. So far I have:
Aws::S3::Resource.new.bucket(ENV['AWS_BUCKET_NAME']).object(s3_key).get({ response_target: "#{Rails.root}/tmp/items.csv" })
In Pry, this returns:
output error: #<IOError: closed stream>
However, navigating to tmp/, I can see the items.csv file and it contains the right content. I am not certain whether the return value is an actual error.
My second concern: is it fine to store temporary files in "#{Rails.root}/tmp/"?
Or should I consider another approach?
I can load the file in memory and then CSV.parse. Will this have implications if the CSV file is huge?
I'm not sure how to synchronously return a file object using the aws gem.
But I can offer some advice on the other topics you mentioned.
First of all, /tmp: I've found that saving files there is a workable approach. On AWS, I've used this directory to create a local LRU cache for S3-stored images. The key thing is to pre-empt the situation where a file has been automatically deleted; if that happens, the file needs to be re-fetched. By the way, Heroku has a 'read-only filesystem' but still permits you to write into /tmp.
The second part is the question of synchronously returning a file object.
While it may be possible to do this using the S3 gem, I've found success fetching it over HTTP using something like open-uri or mechanize. If it's not supposed to be a publicly-available asset, you can change the permissions on S3 to restrict access to your server.
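On the in-memory question from the original post: rather than reading the whole body into memory for CSV.parse, you can keep the response_target download and then stream rows with CSV.foreach, so even a huge file never has to fit in memory at once. A sketch (the object key here is illustrative):

require "aws-sdk-s3"
require "csv"

target = "#{Rails.root}/tmp/items.csv"

# Download straight to a local file instead of holding the body in memory.
Aws::S3::Resource.new
  .bucket(ENV["AWS_BUCKET_NAME"])
  .object("exports/items.csv")
  .get(response_target: target)

# CSV.foreach yields one row at a time instead of slurping the file.
CSV.foreach(target, headers: true) do |row|
  # process row...
end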

Create database from dump file in iOS app during first time launch

I am creating an iOS application which has a huge pre-populated sqlite database. The database file is around 140MB. I have taken a dump of this db, and compressed it in RAR format, and now its size is around 16MB.
I want to know if it's possible to bundle the dump file (16MB) with the iOS application, uncompress the .rar file, and create the database at runtime (i.e., during the first launch of the application).
I have found a library https://github.com/ararog/Unrar4iOS for uncompressing files in iOS, but I still want to know how to create database from the dump file after extraction.
Thanks in advance for answers.
Just uncompress the file and save it into a good spot in your file hierarchy. iOS knows about SQLite, so just use your database as normal. The important thing is to copy the file out of your bundle, because it won't be writable there; your decompression step will accomplish that anyway. You also need to be careful about iCloud: it won't like backing up a file of that size, so mark the file as non-shared (excluded from backup).
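A sketch of that first-launch install in Swift; "seed.sqlite" is a made-up name, and the unrar step is elided since it depends on the library you pick (in the scenario above you would extract the archive to the destination instead of copying a bundled file):

import Foundation

func installSeedDatabase() throws -> URL {
    let fm = FileManager.default
    let docs = try fm.url(for: .documentDirectory, in: .userDomainMask,
                          appropriateFor: nil, create: true)
    var dest = docs.appendingPathComponent("seed.sqlite")

    // Only install on first launch.
    if !fm.fileExists(atPath: dest.path) {
        guard let bundled = Bundle.main.url(forResource: "seed",
                                            withExtension: "sqlite") else {
            throw CocoaError(.fileNoSuchFile)
        }
        try fm.copyItem(at: bundled, to: dest)

        // Keep iCloud from trying to back up the large file.
        var values = URLResourceValues()
        values.isExcludedFromBackup = true
        try dest.setResourceValues(values)
    }
    return dest
}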

Reading Excel files with roo /rails

I am using the Ruby gem roo to read and parse uploaded Excel and CSV files.
I understand that roo reads an Excel file with Excel.new("myfilename"). My issue is that I have to read a file uploaded via the form upload helper, which arrives as a temp file. I am saving the temp file to disk before reading it with roo's Excel class.
Though I am uploading valid Excel files, I am getting the error:
the file is not an Excel/xlsx
Is there a way to directly read from Uploaded IO?
Can you guys tell me what am I doing wrong here?
Thanks!
If you are developing on a Windows box, you have to add a 'b' (binary) to the file mode when you open files, i.e.:
File.open("spreadsheet.xls","rb")
for read only, binary.
Not sure if that's your problem, but I faced a similar one and that was the solution.
Good luck!
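Building on that: if you write the Rails upload to disk yourself in binary mode, roo should accept the resulting path. A sketch, where params[:file] stands in for however your form names the upload (Roo::Excelx is the newer roo API; older versions exposed a bare Excelx class):

upload = params[:file]   # ActionDispatch::Http::UploadedFile
path = Rails.root.join("tmp", upload.original_filename).to_s

# "wb" so Windows doesn't mangle the binary .xlsx content.
File.open(path, "wb") { |f| f.write(upload.read) }

spreadsheet = Roo::Excelx.new(path)
puts spreadsheet.sheets  # list worksheet names as a smoke test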
I am not familiar with roo, but I have used http://rubygems.org/gems/parseexcel
workbook = Spreadsheet::ParseExcel.parse("#{Dir.getwd}/public/excel/foo.xls")
