How can I get it to work: Embedding Twitter's Web Intent - twitter

Normally Web Intent is used as a pop-up. According to Twitter, it also provides embedding functionality.
"Some sites may prefer to embed the unobtrusive Web Intents pop-up Javascript inline or without a dependency to platform.twitter.com. The snippet below will offer the equivalent functionality without the external dependency."
The snippet can be found at https://gist.github.com/894540#file_intents.html
See: http://dev.twitter.com/pages/intents
However, I can't get this snippet to work. I copied the snippet (JavaScript) code into an HTML file and opened it in a browser. Nothing happened! What should I do to make it work?

I had the same problem during NYC Startup Weekend.
The snippet they provide does open up the Twitter popup, as required, but the ability of the Twitter popup window to pass a message back to your web page is a little more complicated. You will need to understand how their widgets.js code works and reproduce what's necessary to set up the RPC framework. My short-term workaround was to include a slightly modified (un-obfuscated) version of widgets.js that would not replace my button with theirs.
I will be tackling this in a week or two, if you can wait.
... or you can just include their widgets.js directly :)

Welcome to the vague world of the Twitter API documentation. I think you're misunderstanding, in no small part due to the incredibly poor wording on the API page:
Embedding that javascript code allows you to open the twitter web intent pages in a "twitter style" popup without requiring a dependency on platform.twitter.com. It does not allow you to embed a twitter intent.
You can see it at work by adding that JavaScript and then adding the following to your page:
<p><a href="https://twitter.com/intent/tweet">New</a></p>
Clicking "New" will open the tweet box in a popup. Remove the JavaScript, and clicking "New" will navigate the page away to the tweet box.
It's incredibly frustrating to me that they don't, at least to my knowledge, allow for proper embedding. I know why they opted for this method, but it's frustrating nonetheless.

Related

Reverse engineering a website: cannot find the form inputs in the POST request

I need to interact with an external website I don't own. This external website requires credentials that I have. My goal is to add a user but the external website does not offer an external API. It looks like they are using Vaadin.
So to add a new user I need to manually fill in a form. Yet I have been searching for the route the "form" takes to post the input I give, but could not find it.
Here is my issue: when I look at the HTML source code in the browser I cannot see any form tag. The buttons all have the same id, "button". When I fill in the form and look at the network tab in the developer tools, I cannot see the inputs I just gave in the "parameters" section, although the POST request does appear. The cookies tab does not show the inputs either.
Consequently my questions are: why can't I find the inputs in the POST request, and where can they be?
Please note: this external website is a medical site so I prefer not to share the URL, and they don't offer a mobile app, so there is no mobile API I could reverse engineer.
Any help appreciated :-)
Not stating the Vaadin version makes it a tad harder to give an exact answer, but at the core both Vaadin 8 and 10+ behave the same way. And the short answer to your question is: without another entry point, like an API, this cannot be done with just some POST request. Vaadin is not simply an HTML-form/request/response based framework; it holds the scene graph on the server side in a session. All communication goes through a single endpoint on the server, and only state changes are communicated back to the client.
For what you are after, your best bet is to use test automation frameworks like Selenium, Geb, or Cypress.
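As a rough illustration of the test-automation route (shown here with Selenium's JavaScript bindings; the URL, selectors, and wait condition are placeholders you would replace after inspecting the rendered DOM):

// Hedged sketch using selenium-webdriver; all selectors and URLs are placeholders.
const { Builder, By, until } = require('selenium-webdriver');

(async function addUser() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example-vaadin-app.invalid/login'); // placeholder URL
    await driver.findElement(By.css('input[type="text"]')).sendKeys('my-username');
    await driver.findElement(By.css('input[type="password"]')).sendKeys('my-password');
    await driver.findElement(By.css('button')).click();
    // Wait until the client-side engine has rendered the next view before touching it.
    await driver.wait(until.elementLocated(By.css('.v-button')), 10000); // placeholder selector
    // ... then navigate to the "add user" form and fill it the same way ...
  } finally {
    await driver.quit();
  }
})();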

Show disclaimer site when opening external link

I'm running TYPO3 v7.6.4.
I already looked into existing plugins and even into writing my own... but I can't find a solution.
My goal is pretty simple:
Show a simple disclaimer page whenever the user clicks a link to any external page.
Is there an easy way to accomplish this?
The easiest way would in fact be to add an on('click') event handler to all links. This would be additional JavaScript and would work with all existing content. Figuring out whether a link refers to an external site should be easy (exclude relative URLs and match absolute URLs against your baseUrl).
However, if this is a legal requirement, you should decide whether JavaScript works for you, because with JS disabled the disclaimer would not be triggered.
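A minimal sketch of that click-handler approach (the disclaimer route is a placeholder, and "external" is assumed to mean a different origin than window.location.origin):

// Intercept clicks on links that leave the current origin and show a disclaimer page first.
document.addEventListener('click', function (e) {
  var link = e.target.closest ? e.target.closest('a[href]') : null;
  if (!link) return;
  var url = new URL(link.href, window.location.href);
  if (url.origin === window.location.origin) return; // internal link, let it pass
  e.preventDefault();
  // Placeholder route: a page that shows the disclaimer and then forwards to the target.
  window.location.href = '/disclaimer?target=' + encodeURIComponent(link.href);
});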

How is this URL modification possible?

Could anyone please tell me how the site http://www.outsharked.com/imagemapster/default.aspx?what.html works this way, modifying the URL without loading/reloading the page? I think this is not done by HTML5, because it works in IE6, which doesn't support HTML5.
I created that site. The commenter is correct, it uses Javascript to change the URL. There's nothing about how that navigation works that is different for IE6 - that browser supports the necessary client-side functionality to do this kind of thing. The basic functionality involves:
capturing click events on the nav and loading the inner content via AJAX
updating the URL to reflect a working direct URL for the target.
The links are also valid anchor links that, in the absence of Javascript, would go to the same page (but load the whole thing). This is your basic AJAX web site setup, with one minor difference. It's common practice to use URLs like this in AJAX/single-page web sites:
http://mysite.com/home#somepage
or even just
http://mysite.com/#somepage
Where the hashtag part represents the actual page a user has navigated to. If someone accessed that url directly, e.g. from outside the site, the site would use Javascript to load the correct content based on the hashtag, after the page had loaded. This means that there might be a little delay for the inner content to reflect the correct page, since it has to run another request after the initial page has loaded from the browser to get the inner content via AJAX.
I was trying to avoid that by creating a setup that worked completely with and without Javascript. If you go directly to a URL within the site such as http://www.outsharked.com/imagemapster/default.aspx?faq.html you will notice it loads the content directly. This URL will work even if Javascript is disabled. You can't actually do this using hashtags, since hashtag content is not sent to the server. Only the client knows what's after the hashtag in a URL. That's why I was using query strings to represent inner pages.
This site architecture was sort of an experiment at the time. It works pretty well but the code isn't fantastic, I didn't really do anything else with it, and I'm sure there are other better-fleshed-out/tested/full-featured frameworks out there to do much the same thing.
But it might not be a bad example of the nuts and bolts of creating a basic AJAX navigation setup, as a learning tool, since it's pretty concise, and also does HTML5 history navigation (e.g. so the back button works on modern browsers).
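A condensed sketch of that pattern (not the site's actual code; '#nav' and '#content' are placeholder selectors, and fetch() stands in for whatever AJAX mechanism is used):

// Intercept nav clicks, load the inner content via AJAX, and push a real URL
// so direct links and the back button keep working.
document.querySelectorAll('#nav a').forEach(function (link) {
  link.addEventListener('click', function (e) {
    e.preventDefault();
    loadInner(link.href);
    history.pushState({ url: link.href }, '', link.href);
  });
});

// Restore content when the user navigates with back/forward.
window.addEventListener('popstate', function (e) {
  if (e.state && e.state.url) loadInner(e.state.url);
});

function loadInner(url) {
  // Fetch the full page and swap only the inner content region.
  fetch(url).then(function (r) { return r.text(); }).then(function (html) {
    var doc = new DOMParser().parseFromString(html, 'text/html');
    document.querySelector('#content').innerHTML = doc.querySelector('#content').innerHTML;
  });
}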

Login to a site using the cocoa framework

I am creating an iOS app that needs to download an HTML page and extract some information from it. To get to the page I also need to log in. I have looked everywhere for some code on how to log in to a site using the Cocoa framework, but every answer I see only seems to answer half the question. Here is the login site: romres.ist-asp.com. I need some code for writing something into the first field (the other two are left blank), then submitting the form, and then I need to be able to see the next page. I believe apps like Facebook use some of the same technology, where you log in to Facebook and then you can see the contents of your profile.
Basically what you want to do is called scraping.
Scraping is really easy for sites that don't require authentication, but in your case what you should do is inspect the POST request made when logging in to the site you're interested in (try to understand how the service responds) and the requests made, once already logged in, to retrieve each page.
The purpose of all of this is to be able to later simulate, in code, the regular HTTP requests that would normally come from a browser.
If you have any doubt ask in the comments.
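To make that flow concrete, here is a rough sketch in plain JavaScript (the same steps map onto NSURLRequest/NSURLConnection on iOS); the login path, field name, and follow-up page are placeholders you would take from the browser's network tab:

// Hedged sketch: POST the login form, keep the session cookie, then request the next page with it.
var loginBody = new URLSearchParams({ user: 'my-username' }); // placeholder field name

fetch('https://romres.ist-asp.com/login', {                   // placeholder path
  method: 'POST',
  body: loginBody,        // sent as application/x-www-form-urlencoded
  redirect: 'manual'
}).then(function (res) {
  // The session is usually carried in a Set-Cookie header; send it back on the next
  // request (a cookie jar or headers.getSetCookie() may be needed if several cookies are set).
  var cookie = res.headers.get('set-cookie');
  return fetch('https://romres.ist-asp.com/next-page', {      // placeholder path
    headers: cookie ? { Cookie: cookie } : {}
  });
}).then(function (res) { return res.text(); })
  .then(function (html) {
    // Parse the returned HTML here to extract the information you need.
    console.log(html.length);
  });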

Display twitter feeds

For an event in a couple of weeks I'd like to make a web page/app which displays tweets from a specific user, a specific hashtag, and all replies to the first user in 3 boxes on the screen.
However I've never tried this. I want to use either .NET (C#) or HTML/CSS/JS since I'm proficient in those. Are there any libraries/APIs I can use? Or is there a readily available freeware/open-source app I can use?
Have you seen TweetSharp?
Use Twitter's profile and search widgets: the profile widget for the first box, a search of the hashtag for the second box, and a search of to:username for the third box.
I actually just posted this as an answer to another question:
I just updated a plugin to work with the Twitter 1.1 API. Unfortunately, per Twitter's urging, you will have to perform the actual request from server-side code. However, you can pass the response to the plugin and it will take care of the rest. I don't know what framework you are running, but I have already added sample code for making the request in C#, and will be adding sample code for PHP shortly.
The plugin makes a call to statuses/user_timeline, but you will likely want to look at statuses/filter or statuses/search, instead. All you will have to do is add your desired parameters (hashtag, replies, etc.) to the server-side code and it should work (with the addition of your security keys and tokens, of course).
Good luck! :)
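As a rough sketch of the kind of server-side request described above (shown here with Node.js and the node-oauth package rather than C#; all keys, tokens, and the screen name are placeholders):

// Fetch statuses/user_timeline server-side with OAuth 1.0a and hand the JSON to the front end.
var OAuth = require('oauth').OAuth;

var oauth = new OAuth(
  'https://api.twitter.com/oauth/request_token',
  'https://api.twitter.com/oauth/access_token',
  'CONSUMER_KEY', 'CONSUMER_SECRET',           // placeholders
  '1.0A', null, 'HMAC-SHA1'
);

oauth.get(
  'https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=someuser&count=20',
  'ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET',       // placeholders
  function (err, data) {
    if (err) { console.error(err); return; }
    var tweets = JSON.parse(data);
    // Pass `tweets` to the front-end widget/plugin for rendering.
    console.log(tweets.length + ' tweets fetched');
  }
);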