Microsoft Graph: NextLink...what about previous link? - microsoft-graph-api

Microsoft Graph will provide you with "@odata.nextLink".
How can I get the "previousLink"?

This is a design choice; there is no previous link. When you are enumerating a collection, you always page the collection from the beginning until the end, or until you've found the item you are looking for. If you need a different ordering for various reasons, you should leverage the $orderby query option, but those two capabilities (paging and ordering) should be considered distinct; they do not serve the same purpose.
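If you need back navigation in a UI anyway, one workaround is to record each page's URL as you walk forward and re-request an earlier one on demand. A minimal sketch using plain fetch (the /users collection, page size, and token handling are just placeholder assumptions):

```typescript
// Page a Graph collection forward, remembering each page URL so a UI
// can offer "previous" by re-requesting an earlier page.
// Assumes a valid access token; /users and $top=25 are just examples.
async function getPage(url: string, accessToken: string): Promise<any> {
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return res.json();
}

async function pageThrough(accessToken: string): Promise<void> {
  const visited: string[] = []; // our own "previous link" history
  let url: string | undefined = "https://graph.microsoft.com/v1.0/users?$top=25";

  while (url) {
    visited.push(url);
    const page = await getPage(url, accessToken);
    console.log(`${page.value.length} items on this page`);
    url = page["@odata.nextLink"]; // absent on the last page
  }
  // "Previous" from page N is then just visited[N - 1], re-fetched.
}
```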

I just wanted to give my two cents here and say I am just as confused as you are as to why there is no previous-page link.
It's standard behavior in paginated lists to be able to go forward and back. This really makes no sense at all.

Related

How do I "bundle" my Google Analytics websites for /language/en?

I have a website which has only one language, English.
In Google Analytics, I get two different types of URLs, which makes my results harder to analyse:
"/screen/page/obd-ii-pid-examples/language/en"
"/screen/page/obd-ii-pid-examples"
I would like to somehow "aggregate"/bundle these together so e.g. #hits becomes the sum of the two types across my various URLs.
Is this possible somehow?
Best,
Martin
Easiest thing to do would be to apply some filters to the view you are using
https://support.google.com/analytics/answer/1033162?hl=en
A search and replace filter on /language/en would do the trick.
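The filter settings would look something like this (field names as they appear in the GA admin UI; note that view filters only affect data collected after the filter is created, not historical data):

```
Filter Type:    Custom > Search and Replace
Filter Field:   Request URI
Search String:  /language/en$
Replace String: (leave empty)
```

The trailing $ anchors the match to the end of the URI so nothing else gets rewritten.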

Pros and Cons of using hierarchical URLs versus flat?

I'm building a large news site and we'll have several thousand articles; so far we have over 20,000. We plan on having a main menu containing links that display articles matching those categories: clicking "baking" will show all articles related to "baking", and "baking/cakes" will show everything related to cakes.
Right now, we're weighing whether or not to use hierarchical URLs for each article. If I'm on the "baking/cakes" page, and I click an article that says "Chocolate Raspberry Cake", would it be best to put that article at a specific, hierarchical URL like this:
website.com/baking/cakes/chocolate-raspberry-cake
or a generic, flat one like this:
website.com/articles/chocolate-raspberry-cake
What are the pros and cons of doing each? I can think of cases for each approach, but I'm wondering what you think.
Thanks!
It really depends on the structure of your site. There's no one correct answer for every site.
That being said, here's my recommendation for a news site: instead of embedding the category in the URL, embed the date. For example: website.com/article/2016/11/18/chocolate-raspberry-cake or even website.com/2016/11/18/chocolate-raspberry-cake. This allows you to write about Chocolate Raspberry Cake more than once, as long as you don't do it on the same day. When I'm browsing news I find it helpful to identify the date an article was written as quickly as possible, and embedding it in the URL makes that immediate.
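To make that concrete, here's a quick sketch of date-based routing (Express is just an example stack; the article lookup is left hypothetical):

```typescript
import express from "express";

const app = express();

// Matches e.g. /article/2016/11/18/chocolate-raspberry-cake
// The date segments double as a uniqueness namespace for the slug,
// so the same title can be reused on a different day.
app.get("/article/:year/:month/:day/:slug", (req, res) => {
  const { year, month, day, slug } = req.params;
  // findArticle is hypothetical -- look up by (date, slug) in your store
  res.send(`Article "${slug}" published ${year}-${month}-${day}`);
});

app.listen(3000);
```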
Hierarchical URLs based on categories lock you into a single category for each article, which may be too limiting. There may be articles which fit multiple categories. If you've set up your site to require each article to have a single primary category, then this may not be an issue for you.
Hierarchical URLs based on categories can also be problematic if any of the categories ever change: typos, changes to pluralization, a new term coming into vogue and replacing an existing one, or even just a change in wording (e.g. "baking" could become "baked goods"). The terms as they existed when you created the article will be forever immortalized in your URL structure unless you retroactively change them all, invalidating old links (so make sure to set up redirects, e.g. with Drupal's Redirect module if that's your platform).
If embedding the date in the URL is not an option, then my second choice would be the flat URL structure because it will give you URLs which are shorter and easier to remember. I would recommend using "article" instead of "articles" in the URL because it saves you a character.

Umbraco 7 Limiting possible Tags values

On our website it is possible to tag content with a country from a list. This country list could be implemented as a tag control, but I'm concerned about misspellings creeping in over time. However, the country list is very long (150+ entries), so it's not ideal for a multiple-select dropdown either.
What I'm looking to do is have a control with the same type-ahead/autocomplete functionality as the existing tags control, but limit the possible values to those retrieved from a database table.
I also want to be able to list all tags that a piece of content has been tagged with, as well as search for content based on tags, e.g. GetNodesWithTags.
Has anyone developed anything like this before? I've had a look at packages etc. but can't see anything similar. Does anyone have any advice before I start off?
Definitely, using the Tags datatype for this may cause a lot of problems :)
In my opinion, the perfect solution would be to use the nuPickers package (https://our.umbraco.org/projects/backoffice-extensions/nupickers/) and the TypeaheadList Picker available in it.
Depending on your additional requirements, you can use a Lucene index, a C#-accessed source (totally custom: DB, static values, an enum, etc.), or an XML file as the prevalues source for your control.
Then you'll be able to build logic that performs searches based on this field, as it will be a typical property with a value on the nodes. Once again, the suggested way is to use the Lucene-based Examine index, as it's tailored to be fast at searching. You can read more about searching with Examine here: https://our.umbraco.org/documentation/reference/searching/examine/.
Hopefully it will solve your problem.
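If you do end up rolling your own instead, the core of it is small: a type-ahead whose suggestions and accepted values both come from the same fixed list. A rough sketch of the idea (not Umbraco-specific; the country data would really come from your database table):

```typescript
// The allowed values -- in practice loaded from the database table,
// hard-coded here for illustration only.
const allowedCountries: string[] = ["Denmark", "Germany", "Japan" /* ...150+ more */];

// Autocomplete suggestions: only ever offer values from the list.
function suggest(prefix: string): string[] {
  const p = prefix.toLowerCase();
  return allowedCountries.filter((c) => c.toLowerCase().startsWith(p));
}

// Validation on save: drop anything not in the list, so misspellings
// can never creep into the stored tags.
function sanitizeTags(input: string[]): string[] {
  const allowed = new Set(allowedCountries.map((c) => c.toLowerCase()));
  return input.filter((t) => allowed.has(t.toLowerCase()));
}
```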

View search queries by popularity containing a keyword

I have been spending some time watching the search queries that bring people to my site in Google Analytics, in order to see whether people are finding exactly what they are looking for and, if not, to create that new content. But I figured an easier way would be to see what search queries are popular that contain a keyword relating to my site.
For example, I want to see all the most-queried search terms that contain "in japanese",
like "dog in japanese" or "i love you in japanese".
I have found http://www.google.com/trends/
but after playing with it for a while it doesn't seem like I can do this; it seems I can just see the popularity of specific queries. I don't want to see how popular specific queries are, I want to see what queries containing x are popular. Is there anywhere I can do this?
If you join the Google AdWords program, you can use the Keyword Planner tool to try out keywords and immediately get the number of searches per month in a chosen geography. This is a very interesting tool. See http://adwords.google.com.
I'm not sure this question belongs here on SO though.

Search Engine without crawling?

Is there a way to collect web content in order to use it in a search engine without passing by the web crawling phase? Any alternative to web crawling?
Thanks
No, to collect the content you have to...collect the content. :-)
Yes (and sort-of no).
:)
You can download existing data dumps from various websites (Wikipedia, Stack Overflow, etc.) and construct a partial index that way. It obviously won't be a complete index of the internet.
You could also use meta-search to construct your search engine. This is where you use the APIs of other search engines and use THEIR search results as the basis of your index. Examples include Citosearch and OpenSearch. DuckDuckGo uses Yahoo's BOSS API (and now Yahoo uses Bing...) as part of their search engine.
There are also real-time streaming APIs that you could use instead of crawling the web. Look at datasift as an example. There are lots more resources you could cleverly use and avoid/minimize crawling.
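A meta-search front end can be quite small. A hedged sketch of the aggregation step (the upstream endpoints and JSON shape are placeholders, not any real API; real ones need keys and have their own schemas):

```typescript
interface Result {
  title: string;
  url: string;
  snippet: string;
}

// Query several upstream search APIs and merge their results.
async function metaSearch(query: string, upstreams: string[]): Promise<Result[]> {
  const responses = await Promise.all(
    upstreams.map(async (base) => {
      const res = await fetch(`${base}?q=${encodeURIComponent(query)}`);
      return (await res.json()) as Result[]; // placeholder response shape
    })
  );

  // Naive merge: concatenate and de-duplicate by URL. Real engines
  // re-rank here, which is where your own value-add would live.
  const seen = new Set<string>();
  const merged: Result[] = [];
  for (const r of responses.flat()) {
    if (!seen.has(r.url)) {
      seen.add(r.url);
      merged.push(r);
    }
  }
  return merged;
}
```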
If you want to be updated with the latest content on pages, then you can use something like pubsubhubbub protocol to get push notifications for subscribed links.
Or use paid services like superfeedr that make use of the same protocol.
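The subscription handshake in that protocol is just a form-encoded POST to the hub. A minimal sketch (hub, topic, and callback URLs are placeholders):

```typescript
// Subscribe to a feed via a PubSubHubbub/WebSub hub. The hub verifies the
// subscription by calling your callback URL, then POSTs updates to it.
async function subscribe(hubUrl: string, topic: string, callback: string): Promise<void> {
  const body = new URLSearchParams({
    "hub.mode": "subscribe",
    "hub.topic": topic,       // the feed you want pushed to you
    "hub.callback": callback, // your publicly reachable endpoint
  });
  const res = await fetch(hubUrl, { method: "POST", body });
  // Per the WebSub spec, hubs answer 202 Accepted and verify asynchronously.
  if (res.status !== 202) throw new Error(`Hub refused subscription: ${res.status}`);
}

// Example with placeholder URLs:
// await subscribe("https://pubsubhubbub.appspot.com/",
//                 "https://example.com/feed.xml",
//                 "https://example.com/websub-callback");
```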
Directly or indirectly, you have to crawl the web in order to get the content.
Well, if you don't want to crawl, you can follow a wiki-like approach, where users submit links to sites (with a title, description, and tags). So a collaborative link collection can be built.
To avoid spam, a +/- voting system can be added to vote useful sites or tags up and useless ones down.
To avoid spammers mass-voting SERPs, you can weight votes by user reputation.
User reputation can be gained by submitting useful sites. Or somehow tracing usage patterns.
And considering other abuse patterns too.
Well, you got the point, I think.
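The weighting idea above only takes a few lines; the particular damping formula here is just one plausible choice, not a prescription:

```typescript
interface Vote {
  up: boolean;
  voterReputation: number;
}

// Score a submitted link: each vote counts in proportion to the voter's
// reputation, log-damped so high-reputation users can't dominate outright.
function linkScore(votes: Vote[]): number {
  return votes.reduce((sum, v) => {
    const weight = Math.log1p(Math.max(0, v.voterReputation));
    return sum + (v.up ? weight : -weight);
  }, 0);
}
```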
As spammers gradually discover the weaknesses of traditional search engines (see Google bombs, content-scraper sites, etc.), a community-based approach may work. But it would suffer severely from the cold-start effect, and while the community is small the system is easy to abuse and poison...
At least Wikipedia and Stack Exchange have not been spammed to useless levels so far...
PS: http://xkcd.com/810/
