We use Microsoft Teams desktop for daily communications. We have other tools that create custom URL-protocol links, which greatly aid communication via email, etc. The issue is that Teams does not seem to recognize those URLs. It underlines them (presumably from the underlying copied formatting), but nothing happens when you click. Non-custom protocol URLs such as onenote:// work. Has anyone gotten custom URLs to work in Teams?
The question I'm trying to answer for a set of users is how other users end up on their page. There are about 5 different ways a user can end up on your page. For example, they could have searched your name, clicked a link from a newsfeed or received an e-mail with a link to your page.
What is the best way to accomplish tracking these events? I'm initially inclined to create a table to track this. Each link would send an async event to the server to be added to the table. However, I'm also aware that there are many tracking services out there such as Google Analytics and Mixpanel. I've looked at their docs briefly and they don't seem to fit my need.
Am I missing something? Is it worth it to create a "custom" event tracking system to accomplish this?
It is not worth creating your own service. Besides, you cannot add async tracking calls to search engine result pages or emails (that would require tracking code that you cannot inject into search engines, or that would not be executed in mail clients).
Web analytics software tracks traffic sources by analyzing the incoming traffic via its HTTP headers. If a referrer is set, the traffic will be attributed to, well, the referring site, unless the referrer is on a list of known search engines, in which case it will be attributed to organic search traffic, etc.
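For example, a visit arriving with headers like the ones below (a hypothetical request) would be classified as organic search traffic, because google.com is on the known-search-engines list:

```
GET /profile/12345 HTTP/1.1
Host: www.example.com
Referer: https://www.google.com/
```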
In most systems you can customize source attribution by adding query parameters to the URL (obviously this will not work with search engines and the like, since you cannot add parameters to organic search results). For example, with Google Analytics you can add custom campaign parameters to email links or advertising campaigns. If people click on those links, the parameter values will be sent to GA and the source/medium/campaign information will be set accordingly (e.g. traffic from web mail clients would usually be attributed as a referrer, but campaign parameters allow you to attribute the link to your mail campaigns).
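For example, a link in one of your own newsletters might look like the line below (the path and parameter values are invented; utm_source, utm_medium and utm_campaign are the actual GA campaign parameters):

```
https://www.example.com/profile/12345?utm_source=newsletter&utm_medium=email&utm_campaign=profile-invite
```

GA would then report the visit under the "profile-invite" campaign instead of lumping it in with generic referral traffic.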
There might be reasons to create your own system, but channel attribution is not one of them; GA and every other system I know of has this thoroughly covered.
I am currently building a site in SharePoint 2013 that has sites for both internal and external purposes. The "internal" sites are accessible to our development teams, while the "external" sites are used to share content with clients and control their access.
I need the external sites to display document/content lists that live in libraries either in the internal sites or their sub-sites. I am currently doing this through Content Query web parts. I am able to get things working functionally, but the URLs of the displayed items expose site structure and hierarchical information that I do not want to make visible to these external visitors.
Is there any way to mask or alias the URLs? The site's organizational structure must stay intact in order to maintain its permission inheritance. In essence, the external sites should act as a landing page for our clients, with the purpose of promoting collaboration.
You could use short URLs or document IDs (for documents; for lists, see below). For short URLs:
https://www.codeplex.com/site/search?query=short%20url&ac=4
Otherwise, you have a tedious task involving custom code. Our solution was to write web parts for displaying the items and to pass the GUID of the list and the ID of the item as URL parameters. You could also include the GUID of the (sub)site, but then the URL gets long (so what?).
Yet, exposing the structure of information wasn't our concern - our concern was the structured way of showing it.
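For illustration, the masked links our web parts produced looked roughly like the one below (site, page and parameter names are hypothetical); nothing in it reveals which internal site or library the item actually lives in:

```
https://external.contoso.com/clients/SitePages/ShowItem.aspx?listId={GUID-of-the-list}&itemId=42
```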
I'd like to use iOS to post on my users' Facebook walls/tickers/news feeds. I learned that Open Graph can be very specific about the actions users take inside my app, and I'd like to integrate them into my project.
I think I realize now that I am going to need my own server running for Open Graph actions to work, right? Or is this not a must? From what I understand, the server supplies the basic data to Facebook for the post, like the image, main text, secondary text, etc.
Is my server needed just to supply the Facebook posts' data? Is my server called every time a Facebook page is loaded with my app's contents? Or is it done only once, with Facebook copying the posts' content onto its own servers?
What happens if my server is not responsive, etc.?
The short answer: yes, you probably need a server.
The longer answer:
The Facebook documentation on Open Graph is much better than what I can fit here. If you have not already, check out this page and its links: https://developers.facebook.com/docs/opengraph/.
A published action on Facebook is a tuple { user, action, object }. The types of actions and objects are defined in the Facebook developer application (developers.facebook.com/apps).
The content of the post is generated by your iOS client. The post has data that references the action by name and the object by its URL.
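For example, publishing an action boils down to a Graph API request along these lines (the namespace, action and object-type names are placeholders you define in your app settings, and the recipe URL is a made-up object page):

```
POST https://graph.facebook.com/me/myapp:cook?recipe=http://example.com/recipes/42&access_token=USER_ACCESS_TOKEN
```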
The individual objects that your app defines are typically represented by pages on your web server. These pages are scraped by Facebook to extract metadata that defines the object, including images and text. I do not know of safe assumptions you can make about when the object's page will be scraped.
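As a sketch, the head of such an object page carries Open Graph meta tags roughly like these (the app ID, namespace, type name and URLs are placeholders); Facebook's scraper reads them to build the story:

```html
<head>
  <meta property="fb:app_id"      content="1234567890" />
  <meta property="og:type"        content="myapp:recipe" /> <!-- namespace:object_type from your app settings -->
  <meta property="og:url"         content="http://example.com/recipes/42" />
  <meta property="og:title"       content="Pumpkin Soup" />
  <meta property="og:image"       content="http://example.com/images/pumpkin-soup.jpg" />
  <meta property="og:description" content="A short blurb shown under the title." />
</head>
```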
It is possible to create sample objects when you are editing your object types (developers.facebook.com/apps, create or edit one of your apps, "Edit Open Graph", "Add Sample Data"). However, because these are intended for experimentation, they are fairly limited in what you can do with them.
I've been doing some programming off and on for my brother, who is a stock trader. I'm wondering if it is possible to receive a push notification when a site's server adds a page. For example, the site smallcapfortunes.com frequently adds pages that are simple extensions off the main URL, such as /neca/, /stev/, etc.
Are there existing methods to execute this? Or is this something I need to write myself? Has anyone here written anything like that?
I know there are existing sites to track basic updates to a single page. In my research, though, I haven't found anything like this.
Please let me know if there are any other details I need to provide.
Generally you can only get a push notification if a specific website offers that service.
Some websites publish a structured (XML) site map. If the one you're interested in does that, you could pull that sitemap on a regular basis and look for differences.
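A rough sketch of that polling approach in TypeScript (it assumes Node 18+ for the global fetch; the sitemap URL and state-file name are placeholders, and the site may not publish a sitemap at all):

```ts
// poll-sitemap.ts - run this on a schedule (cron, Task Scheduler); it prints URLs
// that were not present on the previous run.
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const SITEMAP_URL = "http://smallcapfortunes.com/sitemap.xml"; // assumption: the site publishes one
const STATE_FILE = "known-urls.json";

async function main(): Promise<void> {
  const xml = await (await fetch(SITEMAP_URL)).text();

  // Naive <loc> extraction; good enough for a plain sitemap, use a real XML parser otherwise.
  const current = new Set(
    [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1])
  );

  const known = new Set<string>(
    existsSync(STATE_FILE) ? JSON.parse(readFileSync(STATE_FILE, "utf8")) : []
  );

  const added = [...current].filter((url) => !known.has(url));
  if (added.length > 0) {
    console.log("New pages:", added); // hook your notification (email, SMS, ...) in here
  }

  writeFileSync(STATE_FILE, JSON.stringify([...current], null, 2));
}

main().catch(console.error);
```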
You're most likely going to want to use http://scrapy.org/ to crawl the site and find new URLs like /neca/ and /stev/, then just trigger the script every so often.
I'm trying to get my Chrome extension working with the Google Calendar API. However, the way Google has set up the extension sandbox makes anything almost impossible.
I can't add the Calendar API using JavaScript; I've tried 200 different ways to include the http://www.google.com/jsapi library. Therefore, I want to try to interact with the Calendar API from PHP. Is it even possible to do a POST from a Chrome extension in order to run my PHP file? If not, it's pretty much impossible to interact with any external API that doesn't have a downloadable library, isn't it? If that's the case, I don't see how you can make anything useful with Chrome extensions.
I think you are still having difficulties because you don't completely understand the difference between content scripts and background pages.
Content scripts have certain limits. They can't:
Use chrome.* APIs (except for parts of chrome.extension)
Use variables or functions defined by their extension's pages
Use variables or functions defined by web pages or by other content scripts
Make cross-site XMLHttpRequests
Basically, all they can do is access the DOM of the page where they were injected and communicate with the background page (by sending requests).
The background page thankfully doesn't have any of those limits; it just can't access the pages the user is viewing. The good news is that the background page can communicate with content scripts (again, through requests).
As you can see, the background page and content scripts complement each other. If you use both at the same time, you have almost no limitations. All you need to do is split your logic correctly between the two.
As to your initial question: content scripts can't make cross-domain requests, but background pages can. You can read more in Chrome's extension documentation on cross-origin XMLHttpRequest.
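To make that concrete, here is a minimal sketch of the split (manifest details are omitted; it assumes the Calendar host is listed under the extension's permissions, and the endpoint URL and message shape are invented). Older docs call the messaging pair sendRequest/onRequest; current Chrome exposes it as chrome.runtime.sendMessage/onMessage:

```ts
declare const chrome: any; // install @types/chrome for real typings

// content-script.ts - injected into the page; it cannot make cross-site requests itself
chrome.runtime.sendMessage({ action: "fetchCalendar" }, (response: any) => {
  // The background page did the actual HTTP work; we only consume the result here.
  console.log("calendar data", response);
});

// background.ts - runs with the extension's privileges, so cross-site XHR is allowed
chrome.runtime.onMessage.addListener(
  (request: any, _sender: any, sendResponse: (r: any) => void) => {
    if (request.action !== "fetchCalendar") return;

    const xhr = new XMLHttpRequest();
    // Placeholder URL - substitute the real Calendar endpoint plus whatever auth it needs.
    xhr.open("GET", "https://www.googleapis.com/calendar/v3/users/me/calendarList");
    xhr.onload = () => sendResponse(JSON.parse(xhr.responseText));
    xhr.send();

    return true; // keep the message channel open for the asynchronous sendResponse
  }
);
```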