Convert UICollectionView data source to CSV file - iOS

I'm searching for how to convert the contents of a UICollectionView to a CSV file and send it with Mail.
I have a collection view like the photo and I want to export the table and send it. From my research it seems the best way is to convert the data to a CSV file.
If you have other suggestions, please tell me.

As #Larme has pointed out, converting this to a CSV file has nothing to do with the visual representation in the collection view. You simply need to serialize the data source to CSV. CSV stands for Comma-Separated Values, which means a type of file where tabular data is encoded using a delimiter between data points (generally a comma, but it could be any character) and a newline for each row of the table. Think of the delimiter as the vertical line between the columns of the table, and the newline as the row boundary:
So your CSV text file might look like this:
TITLEFORCOLUMN1, TITLEFORCOLUMN2, TITLEFORCOLUMN3
ROWTITLEONE, 200, 300
ROWTITLETWO, 400, 500
and so on. It's not quite this simple, though, and there are rules you should follow, especially if you intend the CSV file to be consumed by third parties. There is an official specification (RFC 4180) which you can look at, and you can also get a lot of tips by searching for 'CSV file specification'.
You then need to create a string by iterating through your data source. Start off by creating the line specifying the headers, then add a newline character, and then add your data. So for the above example you could do something like this (assuming the data is set out as a two-dimensional array of strings):
var myCSVString: String = "TITLEFORCOLUMN1, TITLEFORCOLUMN2, TITLEFORCOLUMN3\n"
for lineItem in myDataSource {
    myCSVString += lineItem[0] + ", " + lineItem[1] + ", " + lineItem[2] + "\n"
}
Then write the string to file.
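Note that hand-concatenating fields breaks as soon as a value contains a comma, a quote, or a newline. As a quick illustration of the quoting rules (shown in Python here, since its standard library ships a csv module), a library-backed writer handles the escaping for you:

```python
import csv
import io

# The same table as above, plus one value that contains a comma.
rows = [
    ["TITLEFORCOLUMN1", "TITLEFORCOLUMN2", "TITLEFORCOLUMN3"],
    ["ROWTITLEONE", "200", "300"],
    ["ROWTITLETWO, with a comma", "400", "500"],
]

buffer = io.StringIO()
csv.writer(buffer).writerows(rows)

# The writer wraps the comma-containing field in quotes automatically.
print(buffer.getvalue())
```

In Swift you would need to apply the same quoting rules by hand, or pull in a CSV library, before writing the string to file.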
You'll need to do more research yourself but hopefully that will set you off in the right direction.

Related

Converting string to csv

I have a piece of code in Ruby which essentially adds multiple lines into a csv through the use of
csv_out << listX
I have both a header that I supply in the **options and regular data.
And I am having a problem when I try to view the CSV: all the values are in one row, and it looks to me like any software fails to recognize '\n' as a line separator.
Input example:
Make, Year, Mileage\n,Ford,2019,10000\nAudi, 2000, 100000
Output dimensions:
8x1 table
Desired dimensions:
3x3 table
Any idea of how to get around that? Either by replacing '\n' with something or by using something other than csv.generate:
csv = CSV.generate(encoding: 'UTF-8') do |csv_out|
  csv_out << headers
  data.each do |row|
    csv_out << row.values
  end
end
The problem seems to be the data.each part. Assuming that data holds the string you have posted, this loop is executed only once, and the string is written into a single row.
You have to loop over the individual pieces of data, for instance with
data.split("\n").each

Neo4j - Load CSV with headers containing dots?

If my CSV file consists of headers with dots:
column1.name, column2.age, column3.city ...
How can I read them? Should I avoid dots?
LOAD CSV FROM "URL..." AS row RETURN row.`column1.name`, toInteger(row.`column2.age`)
Importing from CSV is usually something you do for the initial creation of the graph, so I would not worry too much about what looks pretty as long as it gets the job done.
It is absolutely fine to have dots in the headers and you can use:
RETURN row.`column1.name` as name, row.`column2.age` as age
If you want to avoid using back ticks use:
RETURN row['column1.name'] as name, row['column2.age'] as age

beam.io.WriteToText add new line after each value - can it be removed?

My pipeline looks similar to the following:
parDo return list per processed line | beam.io.WriteToText
beam.io.WriteToText adds a newline after each list element. How can I remove this newline and have the values separated by commas, so that I can build a CSV file?
Any help is very appreciated!
Thanks,
eilalan
To remove the newline char, you can use this:
beam.io.WriteToText(append_trailing_newlines=False)
However, for adding commas between your values, there's no out-of-the-box feature on TextIO to convert to CSV. You can check this answer for a user-defined PTransform that can be applied to your PCollection in order to convert dictionary data into CSV data.
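One way to do that conversion is to map each element to a single CSV-formatted line before it reaches WriteToText. A minimal sketch of such a mapping function (plain Python with the standard csv module; the function name is illustrative, not part of Beam):

```python
import csv
import io

def to_csv_line(values):
    """Serialize one list of values as a single RFC 4180 CSV line."""
    buffer = io.StringIO()
    csv.writer(buffer).writerow(values)
    # csv.writer appends a line terminator; strip it, because
    # WriteToText adds its own newline per element.
    return buffer.getvalue().rstrip("\r\n")

print(to_csv_line(["Ford", 2019, 10000]))    # Ford,2019,10000
print(to_csv_line(["has, comma", "plain"]))  # "has, comma",plain
```

In the pipeline this would sit as something like beam.Map(to_csv_line) just before beam.io.WriteToText.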

Reading "Awkward" CSV Files with FSharp CsvParser

I have a large file (200K - 300K lines of text).
It's almost but not quite a CSV file.
The column headers are on the second row, there's a row of dummy text
before that.
There are rows interspersed with the actual data rows. They have
commas, but most of the columns are blank. They aren't relevant to me.
I need to read this file efficiently, and parse the lines that actually are
valid, as CSV data.
My first idea was to write a clean procedure that strips out the first line, and the blank lines, leaving only the headers and details that I want
in a CSV File that the CsvParser can read.
This is easy enough, just ReadLine from a StreamReader, I can keep or disregard each line just by looking at it as a string.
Now though I have a new issue.
There is a column in the valid data that I can use to disregard a whole lot more rows.
If I read the Cleaned file using the CsvParser it's easy to filter by that column.
But, I don't really want to waste writing the rows I don't need to the Clean file.
I'd like to be able to check that Column, while Cleaning the File. But, at that point I'm working with strings representing entire lines. It's not easy to get at the specific column I want.
I can't Split on ',' there may be commas in the text of other columns.
I'm ending up writing the Csv Parsing Logic, that I was using CsvParser for in the first place.
Ideally, I'd like to read in the existing file, clean out the lines that I can based on strings, then somehow parse the resulting seq using the CsvParser.
I see CsvFile can Load from Streams and Readers, but I'm not sure that's much help.
Any suggestions or am I just asking too much? Should I just deal with the extra filtering on loading the Cleaned File?
You can avoid doing most of the work of parsing by using the CsvFile class directly.
The F# Data documentation has some extended examples that show how to do this in some detail.
Skipping over lines at the start of a file is handled by the skipRows parameter. Passing the ignoreErrors parameter will also ignore rows that fail to parse.
open FSharp.Data

let csv = CsvFile.Load(file, skipRows=1, ignoreErrors=true)
for row in csv.Rows do
    printfn "%s" (row.GetColumn "Name")
If you have to do more complex filtering of rows, a simple approach that doesn't require temporary files is to filter the results of File.ReadLines and pass that to CsvFile.Parse.
The example below skips a six-line prelude, reads in lines until it hits a blank line, uses CsvFile to parse the data, and finally filters the resulting rows to those of interest.
open System.IO
open FSharp.Data
open FSharp.Data.CsvExtensions

let tableA =
    File.ReadLines(file)
    |> Seq.skip 6
    |> Seq.takeWhile (fun l -> String.length l > 0)
    |> String.concat "\n"

let csv = CsvFile.Parse(tableA)
for row in csv.Rows |> Seq.filter (fun row -> row?Close.AsFloat() > row?Open.AsFloat()) do
    printfn "%s" (row.GetColumn "Name")
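The same skip-the-prelude, take-until-blank, then-parse pattern works in other ecosystems too; here is a rough Python equivalent (the sample data and column names are made up for illustration):

```python
import csv
import io
import itertools

raw = """junk prelude line
Name,Open,Close
alpha,1.0,2.0
beta,3.0,2.5

trailing junk
"""

# Skip the one-line prelude, keep lines until the first blank line,
# then hand only the surviving lines to the CSV parser.
lines = itertools.takewhile(lambda line: line.strip() != "",
                            itertools.islice(io.StringIO(raw), 1, None))
rows = [r for r in csv.DictReader(lines) if float(r["Close"]) > float(r["Open"])]

print(rows)  # keeps only 'alpha', whose Close exceeds its Open
```

Because the filter runs on parsed rows rather than raw strings, there is no need to hand-split on commas or write a temporary cleaned file.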

selecting specific rows from an unstructured csv file and writing to another file using python

I am trying to iterate through an unstructured CSV file (it has no specific headings). The file is generated by an instrument. I need to select specific rows that have specific column values and create another file. Below is an example of the file layout:
,success, (row1)
1,2,protocol (row2)
78,f14,34(row3)
,67,34(row4)
,f14,34(row5)
3,f14,56,56(row6)
I need to select all rows with the 'f14' value. Below is the code:
import csv
import sys

reader = csv.reader(open('c:/test_file.csv', newline=''), delimiter=',', quotechar='|')
for row in reader:
    print(','.join(row))
I am unable to go beyond this point.
You're almost there:
for row in reader:
    if row[1] == 'f14':
        print(','.join(row))
You just need to check whether the row is one you're interested in or not by checking the value of the column and seeing if it's what you're looking for. That can be done with a simple if row[1] == 'f14' conditional statement. However, that would fail on any blank lines -- which it looks like your input file may have -- so you'd need to preface that check with another one to make sure the row has at least that many columns in it.
To create another CSV file with just those rows in it, all you'd need to do is write each row that passes the checks to another file opened for output, instead of (or in addition to) printing it. Here's a very concise way of writing just those rows to another file.
(Note: I'm not sure why you had the quotechar='|' in your code on the csv.reader() call, because there aren't any quote characters in the input file shown, so I left it out in the code below -- you might need to add it back if that's indeed what would be used if there were any.)
import csv

with open('test_file.csv', newline='') as infile, \
     open('test_file_out.csv', 'w', newline='') as outfile:
    csv.writer(outfile).writerows(row for row in csv.reader(infile)
                                      if len(row) >= 2 and row[1] == 'f14')
Contents of the 'test_file_out.csv' file afterwards:
78,f14,34(row3)
,f14,34(row5)
3,f14,56,56(row6)
