LOAD CSV WITH HEADERS FROM "https://drive.google.com/open?id=1pyWY81bKzcCF7T-i_-MyhY2kJ4Z8NYP8" AS row
WITH row
RETURN row
This is my code. I am trying to access a CSV file with 1 million records from my Google Drive using LOAD CSV.
It gives me the following error:
Neo.DatabaseError.General.UnknownError: At https://drive.google.com/open?id=1pyWY81bKzcCF7T-i_-MyhY2kJ4Z8NYP8 # position 1750 - there's a field starting with a quote and whereas it ends that quote there seems to be characters in that field after that ending quote. That isn't supported. This is what I read: 'docs-fwds":'
I don't understand what the issue is.
Can anyone help me solve it?
The URL you entered is not the actual path to the file but a link to a page that opens the file from Google Drive. So the link you provided points to an HTML page, not to the CSV file.
If you want the actual URL of the file, try downloading it and copy the URL that appears in the new tab.
You can change your query as follows (updated with the actual URL):
LOAD CSV WITH HEADERS FROM "https://drive.google.com/uc?id=1pyWY81bKzcCF7T-i_-MyhY2kJ4Z8NYP8" AS row
WITH row
RETURN row
Don't return the rows if the file is large; the browser will become unresponsive.
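For reference, the share-link-to-direct-link rewrite described above can be automated. This is a minimal Python sketch, assuming the share link always carries an `id` query parameter:

```python
from urllib.parse import urlparse, parse_qs

def drive_direct_url(share_url: str) -> str:
    """Convert a drive.google.com/open?id=... share link into the
    uc?id=... direct-download form that LOAD CSV can actually read."""
    query = parse_qs(urlparse(share_url).query)
    file_id = query["id"][0]  # raises KeyError if the link has no id param
    return f"https://drive.google.com/uc?id={file_id}"

print(drive_direct_url("https://drive.google.com/open?id=1pyWY81bKzcCF7T-i_-MyhY2kJ4Z8NYP8"))
```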
What you can see from the images below is that A1 is filled with a random number generated by a script. The number changes every time the cursor is moved; it is used as a method of forcing Google Sheets to refresh the XML data.
As we can see from the first picture, IMPORTXML worked like a charm using the =IMPORTXML("Link"&A1, "//target content") recipe (where A1 is the random number needed to refresh the data).
Well, it worked for the first link, but not really for the second one. In the first image, B2 uses the last link and shows 1736.5 as the value; it displays fine without the &A1 part.
After adding &A1 to the formula, it gives the error #N/A with "Resource at url not found" as the error detail.
I have already tried using another cell with a calculated number (greater or smaller than A1), and it still gives me that error.
Solution
If you look closely at the second URL you will notice it ends with an = sign. In URLs this symbol is used to express key-value pairs. With your refresh trick you are, in this case, telling the server to look for a resource that doesn't actually exist, hence the IMPORTXML error. Just paste the generated URL into the browser to see the result.
Instead, add a separate random parameter to the URL; that will refresh the page without causing a 404 HTTP error.
For example:
https://www.cnbc.com/quotes/?symbol=XAU=&x=0
Won't cause any error and will give the desired result.
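The same idea in a hedged Python sketch: append the cache-busting value as its own query parameter, rather than concatenating it onto the last key's value (which is what broke the formula here):

```python
import random

def with_cache_buster(url: str) -> str:
    # Append a throwaway parameter so the value of the last existing
    # key (e.g. symbol=XAU=) is left untouched.
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}x={random.randint(0, 10**6)}"

print(with_cache_buster("https://www.cnbc.com/quotes/?symbol=XAU="))
```

In a sheet, this corresponds to writing "...symbol=XAU=&x="&A1 instead of appending A1 directly after the trailing = sign.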
I have an HTML table that has seven columns and 3 rows (the number of rows may be more or less). The second column contains links to PDF files, and the seventh column contains the phrase "Corrective Action". I only want to download the PDF files from the rows that contain the phrase "Corrective Action". However, my code is only downloading the first PDF.
Here is a screenshot of the code:
http://dev.atriumfinehomes.com/clonewebtable/sample.PNG
This is the table:
http://dev.atriumfinehomes.com/clonewebtable/table.htm
Could I get some help with this please?
Get the links of the PDF files using Extract Table command.
Steps to get the links:
- Edit the Extract Table command -> Advanced view -> Step 6: Extract Selected Tag details to CSV file. Tag Name: Hyperlink, Attribute Name: Get URL.
- Save the data to another CSV file. (You can't save it in the same file as it will append or overwrite).
- Open the CSV file as a spreadsheet.
Inside the loop
- Create a new variable $vCounter$, because the links.csv file doesn't contain headers the way the table does.
- Using a variable operation, assign $Counter$-1 to $vCounter$.
- Use the Get Cells command to read cell A$vCounter$ and assign it to a new variable $vPDFURL$.
- Use $vPDFURL$ as the Download File URL in the download command.
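Outside Automation Anywhere, the filter-and-lookup logic of the steps above can be sketched in Python. The CSV contents and example.com URLs below are made up for illustration; the off-by-one index plays the role of the $Counter$-1 adjustment:

```python
import csv
import io

# Hypothetical stand-ins: table.csv has a header row; links.csv has no
# header, just one hyperlink per table row, as extracted in step 1.
table_csv = ("Col1,Link,Col3,Col4,Col5,Col6,Status\n"
             "a,pdf1,x,x,x,x,Corrective Action\n"
             "b,pdf2,x,x,x,x,Closed\n"
             "c,pdf3,x,x,x,x,Corrective Action\n")
links_csv = ("http://example.com/correctiveaction1.pdf\n"
             "http://example.com/other2.pdf\n"
             "http://example.com/correctiveaction3.pdf\n")

table_rows = list(csv.reader(io.StringIO(table_csv)))[1:]  # skip header
links = [row[0] for row in csv.reader(io.StringIO(links_csv))]

# Keep only links whose table row has "Corrective Action" in column 7.
to_download = [links[i] for i, row in enumerate(table_rows)
               if row[6] == "Corrective Action"]
print(to_download)
```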
That's because you only download 'correctiveaction1.pdf', but the PDF in row 3 is named 'correctiveaction3.pdf'.
I need to scrape the "Related searches" shown when searching with any keyword.
The part I marked in a red box can be scraped with iMacros using EXTRACT=HTM.
But then I must manually edit the output each time to split the results into separate TXT lines.
Is there any solution to separate the results into one TXT line each before saving to CSV?
This is my code:
TAG POS=1 TYPE=DIV ATTR=CLASS:b_rich EXTRACT=HTM
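As a sketch of the post-processing step, assuming each related search is the text of an `<a>` element inside the extracted b_rich DIV (the real page's markup may differ), Python's stdlib parser can split the EXTRACT=HTM blob into one line per result:

```python
from html.parser import HTMLParser

class LinkTextExtractor(HTMLParser):
    """Collect the text of each <a> inside the EXTRACT=HTM blob,
    one related search per list entry."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.items.append("")

    def handle_data(self, data):
        if self.in_link:
            self.items[-1] += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

# Made-up sample standing in for the iMacros extraction result.
extracted = '<div class="b_rich"><a href="#">foo bar</a><a href="#">foo baz</a></div>'
parser = LinkTextExtractor()
parser.feed(extracted)
print("\n".join(parser.items))  # one related search per TXT line
```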
Thanks
I have a CSV with a list of file names in the first column
filename1
filename2 etc
The actual files are sitting at website.com/<filename>.pdf
How do I write an applet to download all the files from the URL into maybe Dropbox?
Thanks heaps
It's easy in Zapier:
Create a Zap that is triggered by new rows in a Google Sheet; when triggered, it reads a cell in the row (which contains the file URL) and uploads the file to Dropbox.
Once you have set this up, paste your CSV into Google Sheets and voilà!
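If you'd rather skip Zapier, the same job is a short script. Here is a hedged Python sketch; website.com is the placeholder host from the question, and download_all is defined but not invoked here because it would hit the network:

```python
import csv
import io
import urllib.request

BASE = "https://website.com"  # placeholder host from the question

def pdf_urls(csv_text: str):
    """The first column of the CSV holds bare file names;
    build the full PDF URL for each one."""
    return [f"{BASE}/{row[0]}.pdf"
            for row in csv.reader(io.StringIO(csv_text)) if row]

def download_all(urls, dest="."):
    # Fetch each PDF into dest; swap in the Dropbox API upload
    # here if Dropbox is the real target.
    for url in urls:
        name = url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, f"{dest}/{name}")

urls = pdf_urls("filename1\nfilename2\n")
print(urls)
```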
I got an error when I clicked Import .CSV File.
Click here to see the screenshot.
Thank you, everybody!
I received the same error. It turned out that my CSV had data in a column with no header, which didn't map to any field in my Rails app. It contained comments that I had never noticed.
I had to add a new 'comments' column to the application and create a migration for it.
Then I went back into my CSV and added the header 'comment'. After that it imported properly.
The import method had found the data but didn't know which column to add it to, since there was no header in the CSV. So it raised an error: it doesn't know what '' (no header) is.
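Independent of Rails, a quick pre-flight check can catch this class of problem before importing: flag columns that contain data but have a blank header. A small Python sketch (the sample CSV below is made up):

```python
import csv
import io

def unmapped_columns(csv_text: str):
    """Return indexes of columns whose header is blank but that still
    carry data, i.e. fields an importer has nowhere to put."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    return [i for i, name in enumerate(header)
            if not name.strip()
            and any(len(r) > i and r[i].strip() for r in body)]

sample = "name,email,\nAlice,a@x.com,left a note\nBob,b@x.com,\n"
print(unmapped_columns(sample))  # the third column has data but no header
```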