I have an array reference like this:
a1 = [["http://ww.amazon.com"],["failed"]]
When I write it to a CSV file, it is written like:
["http://ww.amazon.com"]
["failed"]
But I want it written like:
http://ww.amazon.com failed
First you need to flatten the array a1:
b1 = a1.flatten # => ["http://ww.amazon.com", "failed"]
Then generate the CSV by appending each row (array) to the csv object yielded by CSV.generate:
require 'csv'
csv_string = CSV.generate(:col_sep => "\t") do |csv|
  csv << b1
end
:col_sep => "\t" inserts a tab between the fields of each row.
Change it to :col_sep => "," to use a comma instead.
Finally, csv_string contains the correctly formatted CSV.
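Putting the two steps together, a minimal self-contained sketch (using the tab separator; the trailing comment shows the resulting string):

```ruby
require 'csv'

a1 = [["http://ww.amazon.com"], ["failed"]]

# Flatten the nested array into a single row of fields.
b1 = a1.flatten

# Generate one CSV row, with fields separated by a tab.
csv_string = CSV.generate(:col_sep => "\t") do |csv|
  csv << b1
end

csv_string # => "http://ww.amazon.com\tfailed\n"
```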
Ruby's built-in CSV class is your starting point. From the documentation for writing to a CSV file:
CSV.open("path/to/file.csv", "wb") do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end
For your code, simply flatten your array:
[['a'], ['b']].flatten # => ["a", "b"]
Then you can append it to the block parameter (csv), which writes the array to the file as a single row:
require 'csv'
CSV.open('file.csv', 'wb') do |csv|
  csv << [["row"], ["of"], ["CSV"], ["data"]].flatten
end
Saving and running that creates "file.csv", which contains:
row,of,CSV,data
Your question is written in such a way that it sounds like you're trying to generate the CSV file by hand rather than rely on a class designed for that particular task. On the surface, creating a CSV seems easy; however, there are nasty corner cases to handle when a string contains an embedded separator, a line break, or the quoting character used to delimit strings. A well-tested, pre-written class can save you a lot of time writing and debugging code, or save you from having to explain to a customer or manager why your data won't load correctly into a database.
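For example, the CSV class handles those corner cases for you. A quick demonstration of its automatic quoting (the trailing comment shows the exact string produced):

```ruby
require 'csv'

# One plain field, one containing a comma, one containing a double quote.
row = ['plain', 'has, comma', 'has "quote"']

line = CSV.generate_line(row)
# CSV wraps the tricky fields in quotes and doubles the embedded quote.
line # => "plain,\"has, comma\",\"has \"\"quote\"\"\"\n"
```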
But that leaves the question: why does your array contain sub-arrays? Usually that happens because you're doing something wrong as you gather the elements, which makes me think your question should really be about how to avoid that in the first place. (It's called an XY problem.)
Related
I have a piece of code in Ruby which essentially adds multiple lines into a csv through the use of
csv_out << listX
I have both a header that I supply in the **options and regular data.
And I am having a problem when I try to view the CSV: all the values are in one row, and it looks to me like the software fails to recognize '\n' as a line separator.
Input example:
Make, Year, Mileage\n,Ford,2019,10000\nAudi, 2000, 100000
Output dimensions:
8x1 table
Desired dimensions:
3x3 table
Any idea of how to get around that? Either by replacing '\n' with something or by using something other than CSV.generate:
csv = CSV.generate(encoding: 'UTF-8') do |csv_out|
  csv_out << headers
  data.each do |row|
    csv_out << row.values
  end
end
The problem seems to be the data.each part. Assuming that data holds the string you have posted, this loop is executed only once, and the string is written into a single row.
You have to loop over the individual pieces of data, for instance with
data.split("\n").each
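A minimal sketch of the corrected loop; here data holds the raw newline-separated string and headers is the header row (both hypothetical sample values, since the original input isn't shown in full):

```ruby
require 'csv'

headers = ['Make', 'Year', 'Mileage']
data = "Ford,2019,10000\nAudi,2000,100000"  # hypothetical input string

csv = CSV.generate do |csv_out|
  csv_out << headers
  # Split the string into individual lines, then emit each line's fields
  # as its own CSV row.
  data.split("\n").each do |line|
    csv_out << line.split(",")
  end
end
```

With this, each record lands on its own row instead of the whole string being written as one.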
I have one xlsx file with 3 sheets. I want to import the data of each sheet into a different table. Please help me with this.
Is it possible to convert it to csv?
If possible, convert this xlsx file to 3 separate csv files (e.g. field delimiter = , and text delimiter = ").
Then, open your rails console (or create a .rb script), read each file with the CSV class, and save the data to the table. Skip the first line if you have a header (drop(1)).
Example:
require 'csv'
CSV.foreach("sheet1.csv").drop(1).each do |row|
  YourTable.create!({
    field_a: row[0],
    field_b: row[1],
    field_c: row[2]
  })
end
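Alternatively, if the exported CSVs keep their header row, headers: true lets you reference fields by name instead of position and skips the header automatically. A sketch, where the file name and column names are hypothetical:

```ruby
require 'csv'

# Hypothetical sheet export with a header row.
File.write("sheet1.csv", "name,email,age\nAlice,alice@example.com,30\n")

rows = []
CSV.foreach("sheet1.csv", headers: true) do |row|
  # Each row behaves like a hash keyed by the header names.
  rows << { field_a: row["name"], field_b: row["email"], field_c: row["age"] }
end
```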
PS: I don't know why people are downvoting this question. SO is supposed to be a place where programmers seek help. The downvoters could at least explain why they are downvoting.
I have inherited a Ruby app which connects to a MongoDB. I have no idea about Mongo or Ruby, unfortunately, so I'm on a rapid googling and learning curve.
The app stores placenames along with their lat/longs, alternative names, people's memories, and comments. It also counts how many times a place has been discussed.
The following rake task, when run, grabs all the locations from the MongoDB and creates a CSV, spitting out one line per location with the user, number of times mentioned, the memories, etc.
task :data_dump => :environment do
  File.open("results.csv", "w") do |file|
    Location.all.each_with_index do |l, index|
      puts "done #{index}"
      file.puts [l.id,
                 l.classification_count,
                 l.position,
                 l.created_at,
                 l.classifications.collect { |c| c.text },
                 l.classifications.collect { |c| c.alternative_names }.flatten.join(";"),
                 l.classifications.collect { |c| c.comment }.flatten.join(";"),
                 l.memories.collect { |m| m.text }.flatten.join(";")].join(",")
    end
  end
end
It works great and generates a CSV I can then pull into other programmes. The problem is that the content contains plain-text fields which break the validity of the CSV (line breaks etc.), and I want to make sure all plain-text fields are properly enclosed within the CSV.
So if I can understand the above query better, I can then add the correct field enclosures to ensure the CSV is valid when loaded into GIS software.
Also, the above takes about 1 hour 45 minutes to run on my laptop, so I want to find out whether it is the most efficient way to do the query.
To date we have around 300,000 placenames listed, and this will rise to a few million, so it will only get slower.
You can generate the CSV with Ruby's 'csv' module:
require 'csv'
task :data_dump => :environment do
  CSV.open("results.csv", "w") do |csv|
    Location.all.each_with_index do |l, index|
      puts "done #{index}"
      csv << [l.id, l.classification_count, ...]
    end
  end
end
This will ensure that the CSV is generated properly. As for the speed, I've only used ActiveRecord with relational databases, but I imagine the problem is the same: the N + 1 query problem. Basically, each time you call l.classifications.collect or l.memories.collect, a separate query is made to fetch that location's classifications/memories from the database. The solution is eager loading:
require 'csv'
task :data_dump => :environment do
  CSV.open("results.csv", "w") do |csv|
    Location.all.includes(:classifications, :memories).each_with_index do |l, index|
      puts "done #{index}"
      csv << [l.id,
              l.classification_count,
              l.position,
              l.created_at,
              l.classifications.collect { |c| c.text },
              l.classifications.collect { |c| c.alternative_names }.flatten.join(";"),
              l.classifications.collect { |c| c.comment }.flatten.join(";"),
              l.memories.collect { |m| m.text }.flatten.join(";")]
    end
  end
end
(and you might need to do the same for alternative_names; I don't remember the syntax for nested eager loading). This reduces the work to a small, fixed number of queries, which should be much faster.
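To illustrate why switching to the CSV class fixes the broken rows: the writer quotes any field containing a separator, quote character, or line break, so multi-line free text survives a round trip. A small demonstration (the trailing comment shows the exact string produced):

```ruby
require 'csv'

memory = "First line\nSecond line"
line = CSV.generate_line([1, memory, "plain"])
# The embedded newline is preserved inside a quoted field.
line # => "1,\"First line\nSecond line\",plain\n"

# Parsing it back recovers the original fields intact.
parsed = CSV.parse_line(line)
```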
I have a CSV file like:
Header: 1,2,3,4
content: a,b,c,d,e
a,b,c,d
a,b
a,b,c,d,d
Is there any CSV method I can use to easily validate the column consistency instead of parsing the CSV line by line?
One way or another, the whole file has to be read.
Here is a relatively simple way. First the file is read and converted to an array of rows, which is then mapped to an array of lengths (the number of fields per row). If the minimum and maximum of that array are equal, every row has the same number of fields.
If you'd hate to read the file twice, you could instead remember the length of the header and, while parsing the file, check that each record has the same number of fields, throwing an exception otherwise.
require "csv"
def valid? file
a = CSV.read(file).map { |e|e.length }
a.min == a.max
end
p valid?("data.csv")
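The single-pass variant described above might look like this; the sample file contents are hypothetical, with a deliberately short third row:

```ruby
require 'csv'

# Hypothetical input file: the third row has too few fields.
File.write("data.csv", "1,2,3,4\na,b,c,d\na,b\n")

def validate!(file)
  width = nil
  CSV.foreach(file) do |row|
    width ||= row.length  # remember the header's field count
    unless row.length == width
      raise "expected #{width} fields, got #{row.length}"
    end
  end
  true
end

begin
  validate!("data.csv")
rescue RuntimeError => e
  result = e.message
end
```

This reads the file once and fails fast on the first inconsistent record.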
The csv_validator gem could also be helpful here.
How do I display and extract data from a feed URL?
I only want to import/display the entries that have a category_id of 10.
This is the feed url:
http://www.euroads.dk/system/api.php?username=x&password=x&function=campaign_feed&eatrackid=13614&version=5
The format of the feed is:
campaignid;;advertid;;title;;startdate;;enddate;;amountleft;;price;;percent;;campaigntype;;targetage;;targetsex;;category;;category_id;;cpc;;advert_type;;advert_title;;bannerwidth;;bannerheight;;textlink_length;;textlink_text;;advert_url;;advert_image;;advert_code;;campaign_teaser;;reward/cashback;;SEM;;SEM restrictions
Here is a sample of the feed:
campaignid;;advertid;;title;;startdate;;enddate;;amountleft;;price;;percent;;campaigntype;;targetage;;targetsex;;category;;category_id;;cpc;;advert_type;;advert_title;;bannerwidth;;bannerheight;;textlink_length;;textlink_text;;advert_url;;advert_image;;advert_code;;campaign_teaser;;reward/cashback;;SEM;;SEM restrictions
2603;;377553;;MP3 afspiller;;2010-07-21;;2011-12-31;;-1;;67,00;;;;Lead kampagne;;Over 18;;Alle;;Elektronik;Musik, film & spil;;7,13;;0,97;;Banner;;;;930;;180;;0;;;;http://tracking.euroads.dk/system/tracking.php?sid=1&cpid=2603&adid=377553&acid=4123&eatrackid=13614;;http://banner.euroads.dk/banner/1/2603/banner_21153.gif;;;;http://banner.euroads.dk/banner/1/2603/teaserbanner_1617.gif;;Allowed;;
The data format looks like a variation on CSV, with ';;' used as the column separator. Based on that:
require 'csv'
CSV.parse(data, :col_sep => ';;') do |csv|
# do something with each record
end
data will be the content you receive.
Inside the loop, csv will be an array containing each record's fields. The first time through the loop yields the headers; subsequent passes yield the data records.
Sometimes you'll see ';;;;', which means there's an empty field. For instance, field;;;;field converts to ['field', nil, 'field']. You'll need to figure out what to do with nil fields; I'd suggest mapping them to empty strings ('').
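A small sketch of both points together, using a shortened, hypothetical record in the feed's format (only three of the real columns, with the middle field left empty):

```ruby
require 'csv'

data = "campaignid;;advertid;;title\n2603;;;;MP3 afspiller\n"

records = []
CSV.parse(data, :col_sep => ';;') do |csv|
  # Empty fields parse as nil; map them to empty strings.
  records << csv.map { |field| field.nil? ? '' : field }
end
```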