After adding FAQ schema to my blog, I received a notification from Google Search Console saying "Unparsable structured data" for this link:
"https://www.chikasom.com/2020/12/nigerian-defence-academy-kaduna-list.html?m=1"
It also said that the FAQ schema implemented on the Blogger site has not yet been indexed.
Both the schema validation website and Google's Rich Results Test confirm that there is an error.
I am not sure, but I suspect the problem comes from these "”" characters. I tried removing them but couldn't:
{
"@context”:
I didn't see any result, since I was unable to remove the "”" characters, and I am not sure whether they are really the problem.
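For reference, a minimal FAQPage JSON-LD snippet using only straight ASCII quotes would look something like the sketch below; the question and answer text here are placeholders, not taken from the site:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Example question?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Example answer."
    }
  }]
}
</script>

If any of the quotes in the markup are the curly kind (“ or ”), JSON parsers and Google's structured data tools will reject it, which would match the "unparsable structured data" error.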
My site is https://www.wilfredamaz.com/. In the Google Search Console live test it says: "URL is available to Google. If it gets indexed and selected as canonical, it could appear in Google Search results with all relevant enhancements." Under Coverage it says "Indexed, not submitted in sitemap," and then that the URL will be indexed only if certain conditions are met; from there onwards you are expected to be familiar with Google's conditions. It also says something about a "duplicate URL". What does that mean? Is there a solution to this problem? Indexing is very much overdue: the URL was first submitted to Google on the 5th of May and last submitted on the 12th of July. I tried to solve it by adding code to the .htaccess file, but nothing happened. Please help.
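One common cause of a "duplicate URL" verdict is that the same page is reachable at several addresses (for example with and without www, or with and without a query string), and Google picks one of them as canonical. A minimal sketch of declaring the preferred version yourself, with a placeholder href, is a canonical link element in the page's <head>:

<link rel="canonical" href="https://www.wilfredamaz.com/your-preferred-page/" />

This is only an illustration of the general technique; the right canonical URL depends on which duplicate versions of the page actually exist.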
We have a relatively large website, and looking at Google Search Console we have found a lot of strange errors. By a lot, I mean 199 URLs give a 404 response.
My problem is that I don't think these URLs can be found on any of our pages, even though we have a lot of dynamically generated content.
Because of this, I wonder: are these URLs the crawler found, or requests coming to our site, like mysite.com/foobar, which would obviously return 404? Can someone confirm this for me?
Google Search Console reports all backlinks to your website that deliver a 404, no matter whether a webpage has ever existed at that URL in the past.
When you click a URL in the list of pages with errors, a pop-up window gives you details. There is a "Linked from" tab listing all (external) links to that page.
(Some of the entries can be outdated. But if these links still exist, try to get them updated or set up redirects for them. The goal in the end is to improve the user experience.)
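For example, assuming an Apache server, a redirect for one of those dead inbound links could be a single line in the .htaccess file (both paths here are placeholders):

Redirect 301 /old-broken-path /replacement-page

With that in place, visitors following the stale backlink land on a working page instead of a 404.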
I'm helping someone with an IMPORTHTML problem in Google Sheets. I'm not familiar with IMPORTHTML, but I thought this should work:
=importhtml("http://www.stockq.org/","table",1)
I don't care which table I'm importing, so long as it imports something. It gives the error message "Error: Could not fetch url: http://www.stockq.org/", but the website is accessible in my browser. That's really bizarre.
My Google Spreadsheet can't cope with the Chinese characters, but the numbers recognisable to me on the web page are happily imported, at least for the middle table of the three, with:
=importhtml("http://www.stockq.org/","table",A12)
This is much what I think was mentioned by @DigitalSeraphim way back in September. To quote from an answer that was deleted (as not an answer?):
So, I have been building a page to help me keep up with mod updates for my minecraft server, using importxml heavily. I have found that I get the same error for some sites that load absolutely fine in the browser. Looking into it further, I found that the sites are reporting a 404 error, but actually returning the data requested. According to https://drupal.stackexchange.com/questions/110651/how-to-show-a-node-but-return-http-404-response, this is used to remove pages from search engines, as I had assumed. I don't think there is any way around this without some hackery... namely, setting up a "proxy" server that would "fix" the status.
However, it appears that the example you gave is now working, so maybe give it another try.
TL;DR
Use IMPORTXML with XPaths.
I encountered a similar problem and tried switching between http and https. The workaround worked occasionally, but the results were not consistent (either way failed a lot).
Later I noticed there is another function named IMPORTXML (XML, not HTML). With it you can query content from the same URL and apply an XPath instead.
Therefore I would suggest switching to IMPORTXML. For example, the following formula
=IMPORTXML("http://www.stockq.org/index/IBOV.php", "//table[@class='indexpagetable']")
will give you all the tables that have class indexpagetable from the page of the given URL.
Note that XPath syntax is slightly different in the spreadsheet; refer to the documentation for more specifics.
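One spreadsheet-specific quirk is quoting: the whole XPath sits inside the formula's double-quoted string, so string literals inside the XPath are easiest written with single quotes. As a hypothetical variation on the formula above, this would pull only the first row of each matching table:

=IMPORTXML("http://www.stockq.org/index/IBOV.php", "//table[@class='indexpagetable']//tr[1]")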
(As a preface, SurveyMonkey requests that questions to their developers be posted to SO with the tag 'surveymonkey')
We've been calling the SurveyMonkey API without problem until yesterday. As of yesterday, the response we're getting back seems to contain an invalid JSON string. The problem seems to lie in the response object containing several unescaped double-quote characters. Below is an example of the response we're getting when calling get_survey_details:
"heading": "Please click "Next" below to proceed.\r\n"
As you can see, there are two unescaped double-quotes ("Next"), which is resulting in an invalid JSON object error.
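For reference, a valid response would have those inner quotes backslash-escaped, like this:

"heading": "Please click \"Next\" below to proceed.\r\n"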
To verify that it isn't a problem with our code, I made the same API call via terminal (curl), and got the same response.
Reproducing the issue is as easy as creating a survey with double-quotes in a heading, question text, or answer text, and then calling get_survey_details.
Seeing as this has been working fine up until yesterday, I'm wondering if something has recently changed on the SurveyMonkey end of things? We're dead in the water until this gets resolved.
Thanks!
Per the developers, this was an issue on their end. As of yesterday, they had pushed a fix that resolves it.
Is there any Ruby gem / Rails plugin available for parsing a résumé and importing that information into an object or form?
I may be wrong, but I don't think you'll find anything completely automated to do this, because a résumé (or CV) can be structured in so many different ways and can contain very different types of data. Any completely automated solution is likely to have accuracy problems, since it is technically a difficult problem to solve.
You may find this answer useful.
Here are some other suggestions that might help:
Require a user to enter their details into a form on your website instead of uploading a Word document. You'll then be able to explicitly ask for the data you want and you'll be able to store the data in a structure that suits you. However, this may be too much of a barrier to entry for your users.
Allow a user to submit the URL of their résumé published using the hResume microformat. Sites like LinkedIn already publish résumés in this format. There is a Ruby gem mofo which can parse microformats including hResumes. However, not all users will have an on-line résumé like this.
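As a rough sketch of that second approach: assuming the mofo gem exposes an HResume class with the same find interface its documentation shows for hCards (an assumption, not a verified API), fetching and parsing could look like this:

require 'rubygems'
require 'mofo'

# Assumption: HResume mirrors mofo's documented HCard.find interface.
resume = HResume.find :first => 'http://example.com/my-hresume-page'

# The parsed object exposes the microformat's properties (contact
# details, experience entries, and so on), which you could copy into
# your own model before showing them in a form for the user to confirm.
puts resume.inspect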