I am currently working with a Google Apps Script that accepts user input and stores it in another spreadsheet. This logic works fine, but to facilitate it I created a form-like area on my worksheet and placed some images that look like buttons. When I (and other collaborators) open this spreadsheet, we often find that the images are not in their place, as shown below:
Note: in the screenshot above, the buttons (images) 'Get Case Details' and 'Delegate Tasks' are not in their original positions. Ideally, they should appear as below:
As a workaround, I just switch to another tab/worksheet and come back to mine, which shows the images in their correct locations again.
I checked this discussion on the Google Docs forum, but it looks like a known issue with no answer yet.
Does anyone have any idea? Has anyone come across this problem?
Besides the discussion that you linked, there are many other similar reports over the years on the Google Docs Help Forum and elsewhere on the web.
One alternative is to increase the whitespace around the buttons. Another is to use a different UI element, such as a custom menu, a sidebar, or a dialog.
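For example, a custom menu takes only a few lines of Apps Script and is not affected by image anchoring. A minimal sketch, assuming handler functions named getCaseDetails and delegateTasks already exist in your project (those names are placeholders for whatever your images currently trigger):

// Adds a "Case Tools" menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('Case Tools')
      .addItem('Get Case Details', 'getCaseDetails')
      .addItem('Delegate Tasks', 'delegateTasks')
      .addToUi();
}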
Longtime lurker, first-time poster. I usually solve my issues and upvote without needing to post, but I've been stumped all weekend!
Edit: Erik solved it below.
I'm looking for a way to extract the "datePublished" or "dateModified" value from a Substack article into a Google Sheet.
Goal: this will tell me the last date/time I updated, for example, my PS5 restock guide, my Walmart PS5 restock guide, etc. If a guide is too stale, I try to add relevant information. Having this in Google Sheets streamlines things, as there are dozens of guides.
Test Google Sheet:
https://docs.google.com/spreadsheets/d/1hLBFMWCTc2hpC-1C8Sxd5OVREdNHTVTtrJsAAU5Jl94/edit#gid=0
I've done this before for other sites I've worked at, but there appears to be no date in the metadata on Substack. (I could be wrong, as I'm no expert at reading XPath.)
I do see this in the body for the linked example:
<time datetime="2022-07-29T11:52:00.000Z">Jul 29</time>
I've been trying things like this (where E17 is the cell holding the article URL in Google Sheets), to no effect:
=REGEXEXTRACT(IMPORTXML(E17, "//time[@datetime='datePublished']/@content"), "(.+)T")
I've been mostly working off of this StackOverflow solution, but I haven't been able to apply the same finding to Substack's formatting.
If you want to grab it directly using a Google Sheets formula, this should work for you:
=ArrayFormula(IFERROR(VLOOKUP("*",FLATTEN(IFERROR(REGEXEXTRACT(IMPORTXML("https://www.theshortcut.com/p/ps5-restock","//div[2]"),"Swider(.?.?.?.?\d\d{1}[hrago\s]*)"))),1,FALSE),"???"))
To set realistic expectations, I usually can't invest this much time into working out such a solution on this forum. But I'm on vacation at the moment and filling time while my guest is otherwise occupied.
One further note: this is specific to the two sites you gave as examples. It will only work for sites where the second <div> holds this information and only where the data exists as strings exactly like those found on these two sites (including the poster's last name as "Swider").
ADDENDUM:
Looking at this further, did you try simply the following?
=IMPORTXML(C2, "//time")
(assuming your URL is in C2, etc.)
This seems to work for me, given that it appears the date/time data you want is contained within the first <time> element on the web page.
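If IMPORTXML keeps fighting you, another option is a small Apps Script custom function that fetches the page and pulls the attribute out itself. This is only a sketch, under the assumption that the first <time> element carries the datetime value you want; the function name and regex are mine, not anything standard:

/**
 * Returns the datetime attribute of the first <time> element on a page.
 * @customfunction
 */
function DATEPUBLISHED(url) {
  var html = UrlFetchApp.fetch(url).getContentText();
  var match = html.match(/<time[^>]*\bdatetime="([^"]+)"/i);
  return match ? match[1] : 'not found';
}

You would then call =DATEPUBLISHED(C2) in the sheet, the same way you would call IMPORTXML.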
I have been using ReportLab's Platypus with Python for several weeks, and would caveat this by saying that I am definitely a beginner and my code isn't "good" code, but...
I tend to work by looking at an example, testing it, and then adapting it to my needs...
With this approach, I managed to get a table of contents to work.
I also wanted a "Page x of y" feature. Again, I found code and, after a lot of hassle, managed to get it to work alongside my table of contents, at which point I thought I understood more about how the pieces fit together, but...
I had already seen links work (or not work) independently of the ToC.
However, when I merged my ToC and "Page x of y" samples, I got a wonderful ToC with a link for each topic I wanted, but the links all go to the top of the document.
I have looked at other examples I have tried, and found some where links using <a href="#MYANCHOR"... and <a name="MYANCHOR"... have the same issue.
I have also added to my main code a link using <a href=... pointing at one of the link destinations the ToC uses, and this again jumps to the top of the document.
I put all the flowables that form the document into a list called, e.g., element, so I have code such as element.append(PageBreak()). I can then print out the whole element list to see what is there and compare it with examples where the links do work, and I can see no significant difference.
If I provide an external link to a website (e.g. that excellent stackoverflow.com), those links work, but internal ones don't. I accept that they are handled differently, but I hope this indicates where my failure lies!
I would love to understand why the links are so fickle, as I would also like to get links working in a table and from a drawing, which to my mind should be possible... which may just highlight my ignorance, for which I apologise.
Any help would be really appreciated...
Many thanks,
First of all, I'm completely incompetent and my hours-long attempts at making this work have been fruitless. So please, is there someone who can help me?
I have
table id="..........." tablesorter class="........"
They are on the same line of code, and I'm able to scrape only up to the first element. For me it's important to scrape the second one. I'm trying different ways, but nothing works.
The site is investing.com (https://it.investing.com/equities/americas).
In the image, in the highlighted part on the left where the drop-down menu is, it's possible to select the different American markets (Nasdaq, Dow Jones, S&P 500, etc.). When I select a market other than Dow Jones, the URL of the page always stays the same, while the part I highlighted on the right (tablesorter class="............") changes.
In my sheet (linked below), I've done this, but it only lets me scrape the default table that you see when you open the webpage, not the different markets.
spreadsheet
Your main problem is that IMPORTXML can only retrieve information from static content on websites. Therefore, any content inserted dynamically can't be retrieved by this function.
In your case, you can check which content is not static by heading over to https://it.investing.com/equities/americas and disabling JavaScript on it. If you are using Chrome, please follow this guide to do so.
As JavaScript adds the dynamic content to the site, when you disable it you will observe that the information that should change with the dropdown doesn't actually change. This means it was dynamically inserted and therefore can't be accessed by IMPORTXML. I have attached an image below showing this.
As a workaround, you will need to use other web-scraping techniques, as sketched below.
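For instance, Apps Script's UrlFetchApp can request a page (or, better, the JSON endpoint the dropdown actually calls, which you can find in the browser's Network tab) and write the result into the sheet. A rough sketch, with a placeholder pattern you would replace once you know which request returns the market you want:

function scrapeMarket() {
  // NOTE: fetching the page URL still returns only the static HTML;
  // for the other markets you must fetch the underlying data request
  // you identify in the Network tab. This URL is just the page itself.
  var html = UrlFetchApp.fetch('https://it.investing.com/equities/americas')
      .getContentText();
  var rows = html.match(/<tr[\s\S]*?<\/tr>/g) || []; // placeholder pattern
  var sheet = SpreadsheetApp.getActiveSheet();
  rows.forEach(function (row, i) {
    // Quick plain-text dump: strip the tags and write one row per cell.
    sheet.getRange(i + 1, 1).setValue(row.replace(/<[^>]+>/g, ' ').trim());
  });
}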
When I log into icloud.com and go to Numbers, Pages, or Keynote, I'm shown a bunch of tips on how to get started, which point at or highlight different areas of the page. I can then toggle them off. How is that done, and how can I implement something like that on my web site?
These are called "coach marks", similar to chalk talk in sports.
You can find a pattern gallery here. You will find this question and answer helpful: How do I create a help overlay like you see in a few Android apps and ICS?
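If you want to roll your own rather than use a library, the core of a coach mark is just a dimming overlay plus a positioned tip. A bare-bones sketch (the selector, copy, and styling are all placeholders):

// Dims the page and shows one tip next to the target element.
function showCoachMark(targetSelector, message) {
  var overlay = document.createElement('div');
  overlay.style.cssText =
      'position:fixed;top:0;left:0;right:0;bottom:0;' +
      'background:rgba(0,0,0,0.6);z-index:9999;';
  var rect = document.querySelector(targetSelector).getBoundingClientRect();
  var tip = document.createElement('div');
  tip.textContent = message;
  tip.style.cssText =
      'position:fixed;left:' + rect.left + 'px;top:' + (rect.bottom + 8) +
      'px;background:#fff;padding:8px 12px;border-radius:4px;z-index:10000;';
  overlay.onclick = function () { // click anywhere to dismiss
    document.body.removeChild(overlay);
    document.body.removeChild(tip);
  };
  document.body.appendChild(overlay);
  document.body.appendChild(tip);
}

// e.g. showCoachMark('#search-box', 'Type here to search your documents');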
OK, so I've set up a website where the content is split into modals that are hidden. There are links on the page that, when clicked, make the relevant modal appear. I want to be able to track the links being clicked so I can see what content users are viewing. Ideally, I want the data to appear as fake pageviews. I know this used to be possible, but I'm not sure how to do it nowadays.
I can't seem to find any decent up to date documentation online for how to do this. Can anyone shed some light?
Once you have Analytics initialized for asynchronous loading, just call:
_gaq.push(['_trackPageview', 'FAKE_URL']);
This should work and will not slow down your page load. You might want to consider using "events" rather than fake pageviews; that's also quite simple:
_gaq.push(['_trackEvent', 'CATEGORY', 'ACTION']);
See Google's Documentation for more info.
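To tie that to your modal links, you can push a virtual pageview from each link's click handler. A sketch assuming each modal-opening link carries a data-modal attribute naming its modal (adapt the selector and URL scheme to your markup):

var links = document.querySelectorAll('a[data-modal]');
for (var i = 0; i < links.length; i++) {
  links[i].addEventListener('click', function () {
    // Reports a virtual pageview such as /modal/pricing for data-modal="pricing".
    _gaq.push(['_trackPageview', '/modal/' + this.getAttribute('data-modal')]);
  });
}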