I want to get the price of an item that is on the Steam market. I tried to use this formula, but it is not working: it tells me that the value is too big, and I do not know what to do.
=VALUE(REGEXEXTRACT(REGEXEXTRACT(CONCATENATE(IMPORTXML("https://steamcommunity.com/market/listings/730/Clutch%20Case", "//script[2]")),".*]]"), "[0-9]+.[0-9]+"))
The main problem here is that the prices on the Steam page are generated by JavaScript, and IMPORTXML cannot retrieve dynamically generated data. It seems you're trying to get around this by importing a <script> section, but that will not execute the script; you're just grabbing a bunch of code.
According to this answer, Steam has some endpoints that you can use to get market data. These return a simple JSON string with the item information. The endpoint looks like this:
http://steamcommunity.com/market/priceoverview/?currency=1&appid=[ID]&market_hash_name=[Item name]
The appid is the game's ID, and the market_hash_name is the URL-encoded name of the item. Conveniently, both are already in the URL you are using, https://steamcommunity.com/market/listings/730/Clutch%20Case. The game ID is 730 and the name is Clutch%20Case. So you can plug these into the endpoint to get this URL:
http://steamcommunity.com/market/priceoverview/?currency=1&appid=730&market_hash_name=Clutch%20Case
The endpoint's JSON looks like this:
{
"success":true,
"lowest_price":"$0.30",
"volume":"94,440",
"median_price":"$0.31"
}
Since you only care about the median price, we can use a formula with REGEXEXTRACT to extract only that part:
Here's a sample, with the endpoint URL pasted in A1:
=REGEXEXTRACT(JOIN("", IMPORTDATA(A1)), "median_price:""(\$[0-9]+.[0-9]+)")
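If you'd rather not deal with the regex at all, a custom Apps Script function could fetch the endpoint and parse its JSON directly. This is a minimal sketch, assuming the endpoint keeps returning the fields shown above; the function name STEAMMEDIANPRICE is my own, not something built in:

function STEAMMEDIANPRICE(url) {
  // Fetch the priceoverview endpoint and parse its JSON response.
  var response = UrlFetchApp.fetch(url);
  var data = JSON.parse(response.getContentText());
  if (!data.success) {
    throw new Error("Steam returned success:false for " + url);
  }
  return data.median_price; // e.g. "$0.31"
}

You would then use =STEAMMEDIANPRICE(A1) with the endpoint URL in A1, the same as the REGEXEXTRACT version.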
Edit: As mentioned in the answer I linked, you can try the currency parameter in the URL with different numbers to get other currencies. In your case you can try currency=2 for pounds (£). You'll also have to edit the REGEXEXTRACT to account for this change:
URL: http://steamcommunity.com/market/priceoverview/?currency=2&appid=730&market_hash_name=Clutch%20Case
Formula: =REGEXEXTRACT(JOIN("", IMPORTDATA(A1)), "median_price:""(£[0-9]+.[0-9]+)")
I'm trying to extract prices from the Book Depository site in the local currency. However, it always retrieves the USD prices no matter what I try.
A specific example is:
=IMPORTXML("https://www.bookdepository.com/1/9783836519885";"//span[#class='sale-price']";"bg-BG")
gives US$47.63 even though the Google Sheets settings are changed to Bulgarian and despite the locale being set to "bg-BG".
The same US$47.63 result is retrieved when I use another scraping method like:
=IMPORTXML("https://www.bookdepository.com/1/9783836519885";"//meta[#itemprop='price']/#content";"bg-BG")
The following does not retrieve any result (but that is a secondary problem which I will investigate once I understand the locale problem):
=IMPORTXML("https://www.bookdepository.com/1/9783836519885";"/html/body/div[2]/div[6]/div[3]/div/div[1]/div[1]/div[3]/div/div[2]/div/div[3]/div/span[1]";"bg-BG")
What do you think - is there a workaround?
I don't think that can be done with importXML().
The function's help page seems to be missing the locale parameter, but the formula editor's inline help box says the following:
locale: A language and region locale code to use when parsing the data. If unspecified, the document locale will be used.
The importXML() function only finds data that actually appears in the XML document. The endpoint you mention seems to adjust its content per the client's IP address, but in each response, it only has prices in one currency.
The locale parameter does not change the IP address the request is sent from. It gets sent from one of Google's servers, most of which are in the United States. When you set the locale parameter, the content may get parsed in a different way, but that will not magically make additional content appear in the page.
In this case, what you actually need is to fool Google Sheets into not taking its default "common path" for the request.
You need something like https://www.4everproxy.com/ (but with Bulgarian support).
Here is an example. Generate a direct proxy link for the Book Depository page; the formula then becomes:
=IMPORTXML("https://de.4everproxy.com/direct/aHR0cHM6Ly93d3cuYm9va2RlcG9zaXRvcnkuY29tLzEvOTc4MzgzNjUxOTg4NQ--","//span[#class='sale-price']")
I'm using the following formula to grab some stock data from a website:
=index(split(index(IMPORTHTML("https://finance.yahoo.com/quote/"&A27,"Table", "2"),6,2)," "),1)
It works perfectly for every stock ticker except one, where it gives "Error: resource at url not found".
I double-checked and the ticker name is correct in column A, and the link works if I enter it manually like that. The Yahoo page for the TGH ticker does contain the info I need, in exactly the same way as any other ticker. I'm just lost on why it doesn't work in this single case.
I have a database of elements, and each element has its own QR code. After scanning the code I would like to open the worksheet on a specific tab and jump to the appropriate cell (according to the element name). Calling a worksheet through a URL with the #gid parameter allows you to open a tab, and the range parameter allows you to jump to a specific cell. But what if I want to search for an item by name? Something like: https://docs.google.com/spreadsheets/d/1fER4x1p.../edit#gid=82420100&search=element_name. Is it possible?
Google has not introduced this yet
But you can look into Google Apps Script (Google Sheets' macro-like scripting) to achieve this.
Also, a simpler approach would be to just filter the data, though this obviously changes your requirement. For example, you can create a filter with the name you are looking for, and then you will get a URL for it.
A sample set up this way opens the spreadsheet and filters the data when it loads; look for the filter icon in the toolbar to create the filters.
Here is some documentation to get you started on Google Apps Script, but I don't have a direct link that shows how to catch the URL parameters and process them. What I can tell you is that this is a much more complicated approach than just a URL, because it involves programmatic processing on the spreadsheet side.
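For what it's worth, here is a rough sketch of that programmatic approach: a small Apps Script web app whose /exec URL (with a name parameter) you would encode in the QR code instead of the spreadsheet URL. Everything in it (the name parameter, the spreadsheet ID, the Elements tab) is a placeholder of my own, not something Google provides out of the box:

function doGet(e) {
  // e.g. https://script.google.com/macros/s/.../exec?name=element_name
  var name = e.parameter.name;
  var ss = SpreadsheetApp.openById('YOUR_SPREADSHEET_ID');   // placeholder spreadsheet ID
  var sheet = ss.getSheetByName('Elements');                 // placeholder tab name
  var match = sheet.createTextFinder(name).findNext();       // first cell containing the name
  if (!match) {
    return ContentService.createTextOutput('Not found: ' + name);
  }
  // Build the #gid/range URL mentioned in the question and hand it back as a link.
  var url = ss.getUrl() + '#gid=' + sheet.getSheetId() + '&range=' + match.getA1Notation();
  return HtmlService.createHtmlOutput('<a href="' + url + '">Open ' + name + '</a>');
}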
I have created a report with a calculated column that holds a dynamic URL. It has an ID as a parameter, and I want to slice the data based on that after publishing. When I publish this report to powerbi.com and use this URL to filter the data, it shows me all the data; filtering through the URL is not working.
I just went through a blog post, and it says that filtering through a query string parameter has a limitation: it doesn't work when the report is published to the web. What does that mean?
Below is the calculated column:
https://app.powerbi.com/groups/ce347380-637d-4700-838f-f7b00294256c/reports/374c3b7b-18f0-46f6-b5ec-2c97cbb01611/ReportSection?filter=Append1/Append1[SIMPrjReqID] eq '"&Append1[SIMPrjReqID]&"'
where Append1 is table and SIMPrjReqID is a column on which I want to filter out the data dynamically.
Please advise!
it has a limitation: it doesn't work when the report is published to the web. What does that mean?
This means that passing URL query string parameters to filter data works only on the report's URL as you see it in the browser's address bar when you open the report in powerbi.com, and not on the URL you get when you use the Publish to web option to make it public.
This filter does not work because you didn't specify the field name correctly:
?filter=Append1/Append1[SIMPrjReqID] eq '"&Append1[SIMPrjReqID]&"'
As noted in the official documentation, the filter is passed in this format URL?filter=Table/Field eq 'value', where Field is the name of the field. So your query string parameter should look like this:
?filter=Append1/SIMPrjReqID eq '"&Append1[SIMPrjReqID]&"'
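So, keeping the rest of the report URL from your column as it is, the corrected calculated column would look roughly like this (URL Filter is just a placeholder column name):

URL Filter = "https://app.powerbi.com/groups/ce347380-637d-4700-838f-f7b00294256c/reports/374c3b7b-18f0-46f6-b5ec-2c97cbb01611/ReportSection?filter=Append1/SIMPrjReqID eq '" & Append1[SIMPrjReqID] & "'"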
I am using the Twitter Search API and I can't understand the id field of a tweet.
For example, here is one: <id>tag:search.twitter.com,2005:1990561514</id>. The real ID is the final number part, right? Why doesn't Twitter already provide this in a single element? And why is there a year of 2005 in the ID field? Is that the year the ID belongs to, with the count reset to zero for the following year's tweets? Is the ID indexed to the year?
I am asking all this because I am going to use the since_id option to retrieve new tweets. If the ID isn't really unique and depends on the year, it won't work as expected.
Thanks.
The tag is unique - but parts of it are redundant.
tag:search.twitter.com,2005:1990561514
Obviously, search.twitter.com is the host from which you requested the document.
The ,2005 is constant. As far as I can tell, it has never changed since the service was launched. While there's no official documentation, I would guess that it refers to the Atom specification namespace, http://www.w3.org/2005/Atom.
Finally, the long number is the Tweet's status ID. It will always be unique and can be used for the since_id.
What you will need to do is split the string and just use the number after the last colon as your ID.
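For example, in Apps Script / JavaScript that split could be as simple as this (the variable names are just illustrative):

var entryId = "tag:search.twitter.com,2005:1990561514";
var statusId = entryId.split(":").pop();   // "1990561514"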
I believe you are doing something wrong. If you look at all of the example results from the Twitter Search API, none of the id fields are formatted like this one you are showing.
For example:
http://search.twitter.com/search.json?q=%40twitterapi%20-via
Also, if you check out the example requests page, you will see that all of the id fields have normal formats, i.e.:
"id":122032448266698752
Update:
Now that I know you are using the Atom feed, I can see where the seemingly oddly formatted element comes from. See this article on avoiding duplicates in Atom feeds, as well as this other helpful article.
Basically, Atom feeds require a unique id for each entry in a feed. Some feeds use the "tag" scheme to ensure uniqueness. This format is actually pretty common in Atom feeds, and many frameworks use it by default. For instance, the Ruby on Rails AtomFeedHelper (which might even be what Twitter uses) specifies the default format to be:
"tag:#{request.host},#{options}:#{request.fullpath.split(".")}"