I'm wondering if anyone has any advice on using PhoneGap to send information to and receive information from a web server. Is there a standard way of doing this? Any best practices? I'm pretty new to app development and any advice would be helpful.
Thanks
I personally use jQuery's ajax. The nice thing about PhoneGap and running JavaScript on a phone is that you don't have the usual JavaScript security restrictions, such as cross-domain issues.
One thing you need to remember: in order to reach outside servers you will need to add a new key to the external hosts section of your plist:
KEY: websites
VALUE: *
The * is a catch-all, so any domain can be accessed.
As for the AJAX, treat it like a normal ajax request:
$.ajax({
    url: 'http://your-url.com/script.php',
    type: 'post',
    data: 'arg=foo&argB=bar',
    success: function(data) {
        console.log(data);
    },
    error: function(xhr, textStatus, errorThrown) {
        console.log(xhr + ' ' + textStatus + ' ' + errorThrown);
    }
});
Good luck and happy developing!
I've got a few PhoneGap tutorials on my blog:
http://www.drewdahlman.com/meusLabs/
Use any AJAX you want.
Remember to allow the server you're going to communicate with in your config.xml file!
<access /> - deny all
<access origin="*" /> - allow any
<access origin="http://example.com*" subdomains="true" /> - allow all of example.com
There are more examples in the config.xml file.
We have several interactive kiosks with files that run locally off of the hard drive. They are not locally hosted; there is no web server involved. We do have internet connectivity. Can we track user interactions with Google Tag Manager? I have read a few posts that seem to indicate it's possible, but the setup has changed dramatically since they were authored.
We have our GA and GTM set up, with the appropriate scripts embedded within the local HTML index file. We have set up a container and several tags and triggers for simple tracking of page views, but there is no live data coming into my GA dashboard. I am sure I am missing steps, if this is possible. Any help much appreciated.
Hoping I am headed in the right direction here, but still no tracking. Where do I get a clientId to manually pass in? Thank you!
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXXXXXX-X',{
'storage':'none',
'clientId': 'XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX'
});
</script>
Your question is about GTM, but it is much more likely that your problem is with Google Analytics. There is nothing that prevents GTM from running in a local file (unless you use a very old GTM snippet; I think before GTM switched completely to HTTPS, Google used a URL without a protocol, which would need to be changed), but Google Analytics will not work in a default installation if it cannot set cookies (which, in a local file, it can't).
At the very least you would have to set the "storage" field to "none" in your GA tag or GA settings variable, and then pass in a client ID manually. In a kiosk it is rather hard to determine when a new visit starts, so maybe you could set a different client ID every time users return to a home screen or something like that, or just live with the fact that everybody is the same user in GA.
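To answer the clientId question: analytics.js does not generate one for you when cookie storage is off, so you create it yourself. A minimal sketch (assuming plain analytics.js rather than a GTM tag; generateClientId is a hypothetical helper that produces a random UUID) could look like this:
<script>
// Hypothetical helper (not from the original question): generate a random UUIDv4
// to use as the GA client ID, since cookie storage is disabled.
function generateClientId() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0;
    var v = c === 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}

// In a kiosk you might call generateClientId() again whenever the user returns
// to the home screen, so each "visit" shows up as a new user in GA.
ga('create', 'UA-XXXXXXXXX-X', {
  'storage': 'none',
  'clientId': generateClientId()
});
ga('send', 'pageview');
</script>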
I am looking at my ASP.NET MVC site and considering using IIS to block the HEAD verb from accessing the site.
I don't see why such requests are needed or being used at present.
Why would HEAD requests be required on a site?
The comment posted above is correct. As far as I know, HEAD requests are made by the browser to check things like: do I need to download this again, is the page still there, etc. Basically, if the browser wants to know about a page without downloading the entire page, it will issue a HEAD request. So, in short, they are not a harmful thing to be happening.
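As a rough illustration (not something you normally write yourself, since the browser does it automatically), a HEAD request issued from JavaScript returns only headers and never a body; the URL below is just a placeholder:
// Sketch: ask the server about a resource without downloading its body.
fetch('https://example.com/some-page', { method: 'HEAD' })
  .then(function (response) {
    console.log(response.status);
    console.log(response.headers.get('Last-Modified'));
    console.log(response.headers.get('Content-Length'));
  });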
However, if you want to block these, you can do so in your web.config by using the following syntax (taken from MSDN/IIS)
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <verbs applyToWebDAV="false">
          <add verb="HEAD" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
However, I think this is an atypical setup, and you may want to test your site for performance and breakage across multiple browsers before turning this on for a production-facing site.
There are malicious scanners that will issue a large volume of HEAD requests in an attempt to find known vulnerable resources, such as file upload components that may allow them to upload malicious files. They use a HEAD request as it is faster than a GET request because it has no response body, just headers.
Not only is their intent malicious, but by requesting large numbers of non-existent resources they can put load on your server.
On the flip side, Google also uses HEAD requests to save time and bandwidth when deciding whether to re-fetch a page (i.e. has it changed since the last crawl).
Ideally, you would find a way to block the malicious requests and allow the legitimate ones from Google and web browsers.
I have added a custom domain to my Heroku application and it works fine, but the application still responds to {mysubdomain}.herokuapp.com.
To prevent duplicate content I would like to stop having my application respond to the subdomain. Is there some setting in Heroku which does this for me, or do I need to code a 301 redirect?
Another option is to use the rel="canonical" link tag. This tells search engines which URL to use for content that may appear on multiple URLs:
<link rel="canonical" href="http://www.example.com/correct_url">
Here's what Google has to say: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
(Your use case is explicitly mentioned at the bottom.)
You would need a 301 redirect; Heroku will always respond on the .herokuapp.com domain of your app.
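For illustration only (assuming a Node/Express app; the same idea applies in Rails or any other framework, and www.example.com stands in for your custom domain), the redirect could look roughly like this:
var express = require('express');
var app = express();

// 301-redirect anything that arrives on the Heroku-provided domain
// to the canonical custom domain, preserving the path and query string.
app.use(function (req, res, next) {
  if (req.hostname.endsWith('.herokuapp.com')) {
    return res.redirect(301, 'https://www.example.com' + req.originalUrl);
  }
  next();
});

app.listen(process.env.PORT || 3000);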
I created the hide_heroku gem to handle this. It uses X-Robots-Tag HTTP headers to prevent search engines from indexing anything under *.herokuapp.com.
I don't believe it's possible to remove the Heroku-provided domain name, either via their web interface or the command-line client. If you're concerned about it, redirect, or add a robots.txt to your site that blocks crawlers when it is accessed via .herokuapp.com (I don't know how to do that offhand, sorry).
I suspect Google is reasonably smart about indexing Heroku sites and handles the dual-domain issue itself, but that's just a guess.
I am trying to do OmniAuth OpenID with Google Apps in Ruby on Rails. I know it should work out-of-the-box if I specify ":identifier => 'https://www.google.com/accounts/o8/site-xrds?hd=example.com'" where example.com is the domain that my targeted users come from.
The user can get redirected to Google when accessing /auth/google without a problem, and this openid.identity can be returned from Google:
... &openid.identity=http://example.com/openid?id=xxxxxxxxxxxxxxxxxxxxxxx ...
However, the example.com I am working with does not have the correct "rel='openid2.provider'" <link /> tags set up at http://example.com/, therefore the discovery fails when omniauth-openid tries to check with Google again.
Is there a quick and clean way to work around the default discovery behavior so that I can define https://www.google.com/a/example.com/o8/ud?be=o8 as the server directly without performing the automatic discovery?
Thanks!
I think omniauth-openid uses ruby-openid. If so, you should be able to get it to work easily:
gem install ruby-openid-apps-discovery
Then, somewhere before making the request, add:
require 'gapps_openid'
Google Apps has a slightly different discovery protocol, which is what that gem provides.
Before using the gem that Steve recommended, I came up with a workaround that makes the entire discovery process happen locally, which might be useful to some people. If you only accept users from a single Google Apps domain, you might want to:
Add a line like 127.0.0.1 example.com to your /etc/hosts.
Set up a lightweight HTTP server like nginx, create a file called openid (do not append .html), and add your <link rel="openid2.provider" ... > tag there.
This is slightly faster than using ruby-openid-apps-discovery since it saves your application from sending some requests to an external https server.
I'm looking for a solution to isolate a widget that a partial includes in the main site. The issue appears when a user accesses the site over HTTPS: IE 6 and 7 show a security confirmation dialog because part of the website's resources are not in the secure zone.
First of all, I downloaded the Twitter widget to our side, along with all of its CSS and images. Then I patched the widget JS to point to the downloaded resources, but I still have no luck with the security warning :( I guess the cause of the issue is the AJAX request to Twitter, but I have no idea how to solve it (other than creating some kind of proxy on our side).
Thank you for your attention.
You just need to host the .js file on your server and link to that. That is all.
The script auto-detects SSL and will make requests to https://twitter-widgets.s3.amazonaws.com/ instead of http://widgets.twimg.com/ dynamically, depending on your scenario.
Hope that helps!
geedubb
I got the Twitter widget to work over HTTPS (SSL) by doing the following:
Saved every image, CSS, and JavaScript file on my local web server
Changed every "http" to "https" in the JavaScript AND in the CSS
The last piece was tricky. https://twitter.com/statuses/user_timeline.json brings back data that already includes "http"; namely the avatars and the profile image. So, I found about four places in widget.js that used the user_timeline.json data and hardcoded an image URL wherever that "http" data was used. Searching for "src" will locate all of those places.
It's an ugly fix, but it worked.
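A slightly less manual variant of that last step (just a sketch; it assumes you are already editing widget.js, and the field name below is an assumption for illustration) is to rewrite the URLs coming back in the timeline data instead of hardcoding images:
// Force an http:// URL from the user_timeline.json data to https://
// before it is written into the DOM.
function toHttps(url) {
  return url ? url.replace(/^http:\/\//i, 'https://') : url;
}

// e.g. inside the widget's rendering code:
// tweet.user.profile_image_url = toHttps(tweet.user.profile_image_url);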
You can use a sniffer like HttpWatch to debug this: watch the requests going by and see which ones start with http instead of https. It may be possible to just change the URLs you use to point to https://twitter.com; I'm not sure how your widget works.
Thanks Keshar, this worked for me. I came to the same conclusion: all HTTP requests had to be HTTPS to prevent the IE security warning and still display the Twitter feed. I used the Live HTTP Headers Firefox plugin, which helps show any non-secure HTTP requests, such as the JSON requests.
Jon
If you look through the script, there are calls to an HTTPS site. If you simply replace the protocol/domain with
https://twitter-widgets.s3.amazonaws.com/
instead of
http://widgets.twimg.com/
it works, and you don't have to do anything else.