Hi, I am trying to import the author of a blog post from a URL using this formula:
=INDEX(IMPORTXML(A264,"//span[@class='auth-name']"),1)
It works for some URLs, but for others the data doesn't load and the cell just shows an error loading the data. Please suggest what to do.
I got part of what I tried working, but not completely.
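One common reason an IMPORTXML formula fails only for some URLs is that those pages inject the author name with JavaScript, which IMPORTXML cannot execute. A quick way to check is to fetch the raw HTML outside of Sheets and run the same XPath against it. A minimal sketch in Python, assuming the requests and lxml packages and a placeholder URL standing in for the value in A264:
import requests
from lxml import html

url = 'https://example.com/some-blog-post'  # placeholder for the URL in A264
tree = html.fromstring(requests.get(url).content)
# Same XPath as the formula (note @class, not #class)
authors = tree.xpath("//span[@class='auth-name']/text()")
print(authors[0].strip() if authors else 'auth-name not present in the raw HTML')
If the span is missing from the raw HTML, the page is rendered client-side and IMPORTXML will not see it no matter what XPath you use.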
So... I'm new to this. Been trying to teach myself how to program since maybe April. But I've always been pretty tech-savvy. So... disclaimer out of the way...
I'm trying to make a Magic: The Gathering based app. I'm trying to use Scryfall's database as a backend (so I don't have to catalog all 20,000 cards myself). But I'm running into errors parsing the JSON.
I've tried following along with Hacking with Swift's video series. I've tried two main ways.
Method 1. Downloading the bulk data, saving it to the project, and parsing it locally.
Method 2. Using URLSession.
Both times I get stuck at the same spot.
if let decodedResponse = try? JSONDecoder().decode(Response.self, from: data)
Somehow that part always fails. It works ONLY if I paste (a very small) part of the JSON directly into the main .swift file as let json = """[{ stuff: stuff, more stuff: more stuff }]""". But any time I use either Bundle.main.path(forResource: "nameOfFile", ofType: "json") or URLSession, it completely fails at the decode line.
Theory 1: Scryfall isn't using JSON that conforms to Codable?
Theory 2: My struct to hold the data isn't "catching" the decoded data correctly.
Scryfall API
Hacking with Swift > Codable cheat sheet
Hacking with Swift > Sending and receiving Codable data with URLSession and SwiftUI
Edit: crossposted to Reddit > iOSDev
Your "Response" class probably isn't completely correct. You can use something like quicktype to generate the model class.
You can also use a JSON validator to check the JSON from their site (https://jsonlint.com), but I don't think the fault is on their side.
Also take a look at error handling with JSONDecoder: Error handling using JSONDecoder in Swift
Without more details, I can't give you more help. Try posting a snippet (response model + code) so we can analyse the issue.
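One thing worth checking first is the top-level shape of the JSON. The fact that pasting a small [{ ... }] array into the source file works, while the bulk file and the URLSession response do not, suggests the data may be a top-level array (or a wrapper object with different keys) rather than whatever your Response type expects. The check itself is language-agnostic; here is a quick shape inspection sketched in Python against the downloaded bulk file (the file name is just a placeholder):
import json

# Load the downloaded Scryfall bulk file (placeholder path)
with open('scryfall-bulk.json') as f:
    payload = json.load(f)

if isinstance(payload, list):
    # A top-level array needs to be decoded as an array type in Swift, not a wrapper object
    print('top level is an array of', len(payload), 'objects')
    print('first object keys:', sorted(payload[0])[:10])
else:
    # A wrapper object: these keys are what the Response struct's properties must match
    print('top level is an object with keys:', sorted(payload))
Whatever this prints is the structure your Codable types have to mirror exactly (or map via CodingKeys).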
I'm currently working on a VBScript that will open multiple URLs in order to update documents on a server. I was wondering if there was a way to parse a webpage's content for a specific string, in this case being the updateResult SUCCESS line shown below:
I need to be able to record the success of this webpage text as opposed to the failure page below:
This is all that is on the webpage. How would I go about parsing the text of both these types of pages in order to know that the document has updated correctly or not?
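Since the result pages are plain text, the general approach is just to fetch the response body and do a substring check for "updateResult SUCCESS" (in VBScript that usually means an MSXML2 XMLHTTP request plus InStr on its responseText). A minimal sketch of the idea in Python, with a placeholder URL:
import requests

url = 'http://server.example.com/update?doc=12345'  # placeholder update URL
body = requests.get(url).text

# The result page is plain text, so a simple substring check is enough
if 'updateResult SUCCESS' in body:
    print('document updated OK')
else:
    print('update failed for', url)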
I'm trying to scrape information for a project from the Oil and Gas Authority Open Data site, but my code returns no data.
(The website I'm trying to scrape from)
http://data-ogauthority.opendata.arcgis.com/datasets/ab4f6b9519794522aa6ffa6c31617bf8_0?uiTab=table
I have also realized that the site has an API, but I do not know how to call an API in Rails. If anybody can assist, it would be greatly appreciated.
You can get that data using the requests module:
import requests
import json
url = 'http://data-ogauthority.opendata.arcgis.com/datasets/ab4f6b9519794522aa6ffa6c31617bf8_0.geojson'
r = requests.get(url)
data = json.loads(r.text)
# here you have the data loaded into a dict
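If the endpoint returns standard GeoJSON (which the .geojson extension suggests), the records sit under the "features" key and the table columns under each feature's "properties". A short follow-up sketch:
import requests

url = 'http://data-ogauthority.opendata.arcgis.com/datasets/ab4f6b9519794522aa6ffa6c31617bf8_0.geojson'
data = requests.get(url).json()

# Standard GeoJSON layout: FeatureCollection -> features -> properties / geometry
for feature in data['features'][:5]:
    print(feature['properties'])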
I have a web API that parses a data URL (data:image/png;base64,..) from a query parameter. However, when I try to save the data it becomes "data:image/png". Is there a way to tell Rails to accept the whole data URL string?
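The value being cut off exactly at the semicolon suggests the query-string parser is treating ';' as a parameter separator (older versions of Rack, which Rails uses to parse query strings, split on both '&' and ';'). One likely fix is to percent-encode the value before putting it in the URL, or to send it in the request body instead. A minimal sketch of the encoding in Python, with a placeholder endpoint, parameter name, and payload:
from urllib.parse import urlencode

data_url = 'data:image/png;base64,iVBORw0KGgo...'  # placeholder payload
# urlencode percent-encodes ':', ';', '+', ',' and '/' so the value survives query parsing
query = urlencode({'image': data_url})
print('https://example.com/api/save?' + query)  # placeholder endpoint
The same encoding can be done by whatever client builds the URL, as long as the raw ';' never reaches the query string.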
We need to parse the data from a Google Reader public RSS feed. The problem is that the URL parameter n=numberOfItemsToRetrieve only works up to n=9.
For example, in our test URL:
http://www.google.com/reader/shared/user%2F15926769355350523044%2Flabel%2FPublicas%20RSS?n=2
Retrieves 2 news items
http://www.google.com/reader/shared/user%2F15926769355350523044%2Flabel%2FPublicas%20RSS?n=20
Retrieves only 9 news items
How can we overcome this limitation? Is there another parameter for this case? Or another method?
We found that using this alternative URL the n parameter works fine:
https://www.google.com/reader/api/0/stream/contents/feed/http://www.google.com/reader/public/atom/user%2F15926769355350523044%2Flabel%2FPublicas%20RSS?n=20
The only problem is that the output format is different this way, so if someone finds a better solution we will accept their answer.
It seems the results are cropped only when the URL is viewed in the browser; if you fetch the web contents from code, it returns the correct item count. (In contrast, with the alternative URL the returned contents are right both ways: fetched from code as well as viewed in the browser.)
In Atom format (link at the top right of the two URLs in the OP):
http://www.google.com/reader/public/atom/user%2F15926769355350523044%2Flabel%2FPublicas%20RSS?n=20
The content with /api/ in the URL in the second post is in JSON format, slightly harder to parse than the Atom XML.
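Either way, verifying the item count from code just means fetching the feed and counting entries. A minimal sketch in Python using the Atom URL above:
import requests
import xml.etree.ElementTree as ET

url = ('http://www.google.com/reader/public/atom/'
       'user%2F15926769355350523044%2Flabel%2FPublicas%20RSS?n=20')
root = ET.fromstring(requests.get(url).content)
# Atom entries live in the http://www.w3.org/2005/Atom namespace
entries = root.findall('{http://www.w3.org/2005/Atom}entry')
print(len(entries), 'entries returned')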
https://webapps.stackexchange.com/questions/26567/how-to-raise-google-reader-rss-feed-entry-limit