SpawnDomUri: Restrict to specific Dom-Node - dart

I want to start an isolate that manipulates a specific area of my web page.
To achieve this, I create such an isolate via the function SpawnDomUri, which is able to access the DOM tree.
However, a malicious or erroneous isolate could change the whole web page, which may not be desirable.
So my question is:
Is it possible to restrict the access of a DOM isolate (started via SpawnDomUri) to a specific DOM node (including shadow roots)?
Best Regards,
Alex

I don't think this is possible. I once saw an experiment from MS to try and allow this sort of sandboxing, but I don't believe it's something any major browser can do today.
Most people tend to use iframes to isolate them in this way (rightly or wrongly!).

The only solution that comes to mind is to use a non-DOM isolate and expose an API on the root isolate that is accessed by sending messages and only exposes/executes allowed invocations (the pattern is sketched below).
This is of course very cumbersome, but as Danny said, there is no direct support for your requirement.
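Not Dart, but here is a rough Python sketch of that message-passing pattern (a process stands in for the isolate; the command names and the whitelist are invented for illustration): the spawned code never touches the page directly, it only sends commands, and the root side refuses anything outside its whitelist and scopes everything to one subtree.

```python
# Sketch of a restricted, message-based API between a "root" and a spawned worker.
# Python processes stand in for Dart isolates; command names are invented.
from multiprocessing import Process, Queue

ALLOWED = {"set_text", "add_class"}        # the only operations the root will run

def worker(commands: Queue):
    # The worker has no handle on the page itself; it can only send messages.
    commands.put(("set_text", "#status", "hello"))
    commands.put(("delete_node", "body"))  # malicious/erroneous: will be rejected
    commands.put(("quit",))

def root():
    commands = Queue()
    Process(target=worker, args=(commands,)).start()
    while True:
        msg = commands.get()
        if msg[0] == "quit":
            break
        if msg[0] not in ALLOWED:
            print("rejected:", msg)
            continue
        op, selector, *args = msg
        # Every allowed operation is applied only inside one agreed-upon subtree.
        print(f"would apply {op} to '#widget {selector}' with {args}")

if __name__ == "__main__":
    root()
```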

Related

Can a Dash app be filtered via URL like Power BI?

I am embarking on a POC to replace, with a Dash app, a Power BI dashboard that can't do all the visualizations we need. One major requirement is being able to pass multiple filters to the app via URL, in a manner similar to the Power BI capability.
I have tried to research this and have seen references to URL callbacks, and I believe they provide the functionality I will need, but I don't yet understand Dash apps well enough to be sure.
I'm not asking how to do it, just whether or not it can be done. Thanks!
You can. Use the dcc.Location component (docs), and structure any callbacks that need to listen to the URL to have an Input based on that component. You can even pass multiple things with it, such as "filter_1/3/filter_2/5/filter_3/1" and then .split('/') to break up the string and parse the values.
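A minimal sketch of that setup (the component ids and filter names below are just placeholders):

```python
# Minimal Dash sketch: read filters from the URL via dcc.Location.
# Component ids and filter names are placeholders.
from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Location(id="url", refresh=False),   # mirrors the browser address bar
    html.Div(id="content"),
])

@app.callback(Output("content", "children"), Input("url", "pathname"))
def apply_filters(pathname):
    # e.g. /filter_1/3/filter_2/5/filter_3/1  ->  {'filter_1': '3', ...}
    parts = [p for p in (pathname or "").split("/") if p]
    filters = dict(zip(parts[::2], parts[1::2]))
    return f"Active filters: {filters}"

if __name__ == "__main__":
    app.run(debug=True)
```

Any other callback that should react to the URL just takes the same Input('url', 'pathname').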

How do I show the standard service properties dialog?

(Example: start services.msc and double-click any service.)
If possible, an example in Delphi, please.
David is probably right ;-) In any case, whatever you want to do with services programmatically will probably involve the Service Control Manager API (a rough sketch of driving it from code follows after the links below):
http://msdn.microsoft.com/en-us/library/ms684323%28v=vs.85%29.aspx
and here's another SO question that might help you further
How can I disable a service via Delphi?
and here's some old code that gives you a head start
http://www.swissdelphicenter.ch/torry/showcode.php?id=1322
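Not Delphi, but just to illustrate that programmatic service work goes through the SCM: a quick Python sketch that shells out to Windows' sc.exe (the service name is only an example); the Delphi code linked above talks to the same Service Control Manager API directly.

```python
# Quick illustration: query a service's state and configuration through the
# Service Control Manager using Windows' built-in sc.exe.
import subprocess

def describe_service(name: str) -> str:
    state = subprocess.run(["sc", "query", name], capture_output=True, text=True)
    config = subprocess.run(["sc", "qc", name], capture_output=True, text=True)
    return state.stdout + config.stdout

if __name__ == "__main__":
    print(describe_service("Spooler"))   # "Spooler" is just an example service
```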
As far as I am aware, there is no official programmatic way to show the services control panel UI to the user.
Consequently I believe that you must look for another solution.

What's the best service to use for filtering out spam/abuse/malware links for a link shortening webapp?

I have two services - Lincr and LinkBunch. Lincr is a plain jane URL shortening service, while LinkBunch lets you shorten multiple links into one link. I've had too much spam posted into the services, so I had to shut down Lincr. Now, even LinkBunch seems to be facing the same problem, and it's been disabled by my web host for that reason.
I can't keep shutting down sites like this because of bad links being posted, so I need a malware-filtering API that I can use to filter out the links as and when they are posted.
There are services that let me download an entire bunch of bad links to check against, but instead, I'd prefer doing a live API call on a per-link basis. What can I use for that?
Finally, what's the best malware filtering service out there?
Lincr is down. On LinkBunch, where is your Captcha?
On either site, do you limit the number of posts by IP? Do you use a delay in your response? What about using hidden fields to reduce spam (http://www.reviewmylife.co.uk/blog/2008/05/30/hidden-field-spam-trap-for-phpformmail/)?
I know I'm dodging the question a bit, but you should at least take basic anti-spam measures before resorting to API calls. Even APIs will still fail for newly-hacked / newly-spammed sites.
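For what it's worth, a hidden-field trap and a crude per-IP limit only take a few lines; a hypothetical sketch (Flask, with made-up field name, route and limits):

```python
# Hypothetical sketch of two basic anti-spam measures for a "shorten link" form:
# a hidden honeypot field that humans leave empty, and a crude per-IP rate limit.
import time
from collections import defaultdict
from flask import Flask, request, abort

app = Flask(__name__)
recent = defaultdict(list)       # ip -> timestamps of recent submissions
MAX_PER_MINUTE = 5

@app.route("/shorten", methods=["POST"])
def shorten():
    if request.form.get("website"):          # honeypot: bots tend to fill every field
        abort(400)

    now, ip = time.time(), request.remote_addr
    recent[ip] = [t for t in recent[ip] if now - t < 60]
    if len(recent[ip]) >= MAX_PER_MINUTE:     # too many posts from this IP
        abort(429)
    recent[ip].append(now)

    return "ok"                               # ...actually create the short link here
```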

Search Engine without crawling?

Is there a way to collect web content for use in a search engine without going through the web crawling phase? Is there any alternative to web crawling?
Thanks
No, to collect the content you have to...collect the content. :-)
Yes (and sort-of no).
:)
You can download existing data dumps from various websites (Wikipedia, Stack Overflow, etc.) and construct a partial index that way (a toy sketch of this follows below). It obviously won't be a complete index of the internet.
You could also use meta-search to construct your search engine. This is where you use the APIs of other search engines and use their search results as the basis of your index. Examples include Citosearch and OpenSearch. DuckDuckGo uses Yahoo's BOSS API (and now Yahoo uses Bing...) as part of its search engine.
There are also real-time streaming APIs that you could use instead of crawling the web. Look at DataSift as an example. There are lots more resources you could cleverly use to avoid or minimize crawling.
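As a toy illustration of the data-dump route mentioned above, here is a sketch that builds a tiny inverted index from documents already on disk (the one-document-per-line file format and file name are hypothetical; real Wikipedia or Stack Exchange dumps need a proper parser):

```python
# Toy sketch: build an inverted index from a downloaded dump instead of crawling.
# Assumes a hypothetical "one document per line" text file; real dumps need a parser.
import re
from collections import defaultdict

def build_index(path):
    index = defaultdict(set)                  # term -> set of document ids
    with open(path, encoding="utf-8") as f:
        for doc_id, line in enumerate(f):
            for term in re.findall(r"[a-z0-9]+", line.lower()):
                index[term].add(doc_id)
    return index

def search(index, query):
    terms = query.lower().split()
    if not terms:
        return set()
    return set.intersection(*(index.get(t, set()) for t in terms))

if __name__ == "__main__":
    idx = build_index("dump.txt")             # placeholder file name
    print(search(idx, "web crawling"))
```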
If you want to stay updated with the latest content on pages, you can use something like the PubSubHubbub protocol to get push notifications for subscribed links.
Or use paid services like Superfeedr that make use of the same protocol; a sketch of a bare subscription request follows below.
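For reference, subscribing in that protocol (PubSubHubbub, later standardized as WebSub) is just a form POST to the feed's hub; a sketch with placeholder URLs:

```python
# Sketch of a PubSubHubbub/WebSub subscription request. All URLs are placeholders;
# the hub verifies the subscription and then pushes updates to the callback.
import requests

resp = requests.post(
    "https://hub.example.com/",                            # the feed's hub
    data={
        "hub.mode": "subscribe",
        "hub.topic": "https://example.com/feed.xml",       # feed to get pushes for
        "hub.callback": "https://myapp.example.com/push",  # your HTTPS endpoint
    },
)
resp.raise_for_status()   # a 202 Accepted means the hub will verify asynchronously
```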
Directly or indirectly, you have to crawl the web in order to get the content.
Well, if you don't want to crawl, you can follow a wiki-like approach, where users submit links to sites (with title, description and tags), so a collaborative link collection can be built.
To avoid spam, a +/- system can be involved, to vote useful sites or tags up and useless ones down.
To avoid spammers mass-voting SERPs, you can weight votes by user reputation (see the sketch below).
User reputation can be gained by submitting useful sites, or perhaps by tracing usage patterns, and by considering other abuse patterns too.
Well, you get the point, I think.
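A rough sketch of the reputation-weighting idea (the log damping is just one arbitrary choice, not a recommendation):

```python
# Sketch of reputation-weighted voting for submitted links. log1p damping is one
# arbitrary choice; a reputation-0 account contributes nothing, so freshly created
# spam accounts cannot swing the ranking on their own.
import math

def weighted_score(votes):
    """votes: iterable of (vote, reputation) pairs, with vote in {+1, -1}."""
    return sum(vote * math.log1p(max(rep, 0)) for vote, rep in votes)

print(weighted_score([(+1, 500), (+1, 10), (-1, 2000)]))
print(weighted_score([(-1, 0)] * 100 + [(+1, 300)]))   # 100 zero-rep downvotes lose
```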
As spammers gradually discover the weaknesses of traditional search engines (see Google bombs, content scraper sites, etc.), a community-based approach may work. But it would suffer severely from the cold-start effect, and while the community is small the system is easy to abuse and poison...
At least Wikipedia and Stack Exchange have not been spammed to useless levels so far...
PS: http://xkcd.com/810/

How do I detect a mobile browser, and direct appropriate content to it?

I've read that it's bad (not advised) to use user-agent sniffing to send down the correct content for a mobile browser, so I'm wondering what IS the best way to do this.
I'm using ASP.NET MVC, and I've built my site and it works well on desktop browsers, so I'm looking to begin building a mobile version. When a mobile browser comes to my site, I'd like to use a different set of Views, which ideally possess the following attributes:
Link to pre-scaled images
Use minimal javascript
Remove all but essential content
My first thought was to sniff the user agent and then send down a different .CSS file, but as stated above I've read that this is a bad way to do it, so I'm asking for your thoughts.
The user agent is really all you have in an HTTP GET request, but you should let someone else maintain the list. We use the Microsoft Mobile Device Browser File with a custom view engine, in a manner roughly similar to this Scott Hanselman post.
The best way to detect a mobile browser is to use this wonderful codeplex project:
http://mdbf.codeplex.com/
For background on how you could create targeted views read here:
http://www.hanselman.com/blog/MixMobileWebSitesWithASPNETMVCAndTheMobileBrowserDefinitionFile.aspx
The simplest approach could be to use a separate domain, "m.yourdomain.com" or "yourdomain.mobi" (Source); that way you can assume that the user is on a mobile device.
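The question is ASP.NET MVC, but just to illustrate the separate-domain idea: a crude Python/Flask sketch with a made-up domain and a deliberately tiny user-agent list (in practice you would lean on a maintained device database such as the MDBF linked above):

```python
# Crude illustration of the "separate mobile domain" approach: redirect obviously
# mobile user agents to a mobile subdomain. The domain is made up and the UA list
# is deliberately tiny; real detection should use a maintained device database.
from flask import Flask, redirect, request

app = Flask(__name__)
MOBILE_HINTS = ("iphone", "android", "ipad", "mobile")

@app.before_request
def send_mobiles_to_subdomain():
    ua = (request.headers.get("User-Agent") or "").lower()
    if any(hint in ua for hint in MOBILE_HINTS):
        return redirect("https://m.example.com" + request.path, code=302)

@app.route("/")
def index():
    return "desktop site"
```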
While I believe it's frowned upon to sniff the browser to determine capability, and that you should use capability detection such as jQuery.support instead, when it comes to actually presenting significantly different layouts I think you have to sniff the browser ID and act accordingly.
