I'm not 100% sure if this is a programming question, but I do believe I'm targeting the correct audience for this issue.
I've built a web-based frontend for an application. The frontend will be deployed to the customer's machine (a localhost-based website). However, it uses Google Maps V3 and some other external components, so it will need internet access, but the customer's network is highly secured. Here my issues begin.
To make sure everything works as planned, we need to allow the connections that are made when the webpage starts up, so I need a list of the URLs my frontend uses on startup. I mainly need the Google Maps URLs; they are so varied (googleapis.com, gstatic.com, ...).
How can I get a list of these URLs? Is there any Google documentation (I didn't find any)?
I've thought about using Firebug and listing all the entries in the Network tab. However, that list runs to about 2,000 items (including all the images, scripts, CSS stylesheets, etc. that are loaded from the local website itself).
Or is there a tool/workaround to easily find out which connections should be explicitly allowed for the website to work like it should?
Your approach of using the Firebug - Network tab is good. The Chrome Developer Tools - Network view is also very good. I haven't seen a list of everything that gets loaded by the map, but that is because it varies based on how you set up your map. I know that Google works hard to only load what is needed by your map, based on your options.
So if you only use selected map controls, Google will try to limit the image downloads to just what is needed to display those controls. Of course, if you include additional items, such as a parameter on the loader URL that pulls in the drawing tools (libraries=drawing), you will see additional network loading. Google packaged these "extra" items as libraries precisely so that not everything is loaded by default; only maps that need them load them.
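To make that concrete, here's a minimal sketch of loading the API with only one extra library (the libraries and callback parameters are the standard v3 loader mechanics; the map options and element id are just illustrative):

```typescript
// Minimal sketch: request the Maps API v3 with only the drawing library,
// so scripts for unused libraries are never fetched.
declare const google: any; // provided by the Maps script once it loads

const script = document.createElement("script");
script.src =
  "https://maps.googleapis.com/maps/api/js" +
  "?libraries=drawing" +   // only the extra library we actually use
  "&callback=initMap";     // global callback run when the API is ready
document.head.appendChild(script);

(window as any).initMap = () => {
  // Fewer controls generally means fewer control images downloaded.
  new google.maps.Map(document.getElementById("map"), {
    center: { lat: 0, lng: 0 },
    zoom: 2,
    disableDefaultUI: true,
  });
};
```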
Other than setting your map up and watching what is loaded, I can't think of another option.
After embedding Google Drive files on my website (an awesome feature), I found a minor drawback.
When clicking one of the maps in the list, the visitor is redirected from my page to the Google Drive site. However, I want to keep the visitor on my page and have the folder open within my own website.
I also want folders within those folders to open within the borders of my website, and so on.
The code used is a simple embed snippet (not shown here).
The website is TYPO3-based.
Does anyone have a solution for this problem?
Thank you very much in advance; all replies and suggestions are highly appreciated!
After a quick search, it seems to me this is more of a hack than an official Google feature, so there is probably no easy way to alter the behaviour of the stuff inside the iframe. I would rather recommend setting an outbound link and accepting the fact that you're hosting the files at Google.
In the future, there might (or might not) be a File Abstraction Layer adapter for Drive coming up: http://wiki.typo3.org/FAL_Adapters. Well, probably not so soon. But there is one for Dropbox!
I want to be able to download the entire contents of a website and use the data in my app. I've used NSURLConnection to download files in the past, but I don't believe it is capable of downloading all the files from an entire website. I'm aware of the app SiteSucker, but I don't think there is a way to integrate its functionality into my app. I looked into AFNetworking and ASIHTTPRequest, but didn't see anything useful to me. Any ideas/thoughts? Thanks.
I doubt there is anything out of the box that you can use, but the existing libraries you mentioned (AFNetworking and ASIHTTPRequest) will get you pretty far.
The way this works is: you load the main page, then you go through its source and find any resources that the page uses to display its contents and to link to other pages. You then need to recursively download the contents of those resources, as well as their resources.
As you can imagine, there are a few caveats to this approach:
You will only be able to download files that are referenced in the source code. Hidden files, or files that aren't used by any page, will not be downloaded, as the app doesn't know of their existence.
Be aware of relative and absolute paths: ./image.jpg, /image.jpg, http://website.com/image.jpg, www.website.com/image.jpg, etc. could all link to the same image.
Keep in mind that page1.html could link to page2.html and vice versa. If you don't put any checks in place, this could lead to an infinite loop.
Check for pages that link to external websites: you probably don't want to follow those, since many websites link outward, and you'd end up downloading the entire Internet onto an iPhone with 8 GB of storage.
Any dynamic pages (ones built with a server-side scripting language, such as PHP) will become static, because they lose the server backend that provides them with dynamic data.
Those are the ones I could come up with, but I'm sure there are more.
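To make the recursion and the caveats above concrete, here is a rough sketch of the approach in TypeScript rather than Objective-C (the regex-based link extraction is deliberately naive, and website.com is just a placeholder):

```typescript
// Rough sketch of the recursive download described above.
// "visited" guards against the page1 -> page2 -> page1 loop, and
// new URL(...) normalizes ./image.jpg, /image.jpg, etc. to one form.
const visited = new Set<string>();

async function crawl(pageUrl: string, origin: string): Promise<void> {
  if (visited.has(pageUrl)) return;      // already downloaded: stops loops
  visited.add(pageUrl);

  const html = await (await fetch(pageUrl)).text();
  // ...save "html" to local storage here...

  // Naive extraction; a real crawler would use an HTML parser.
  for (const match of html.matchAll(/(?:href|src)="([^"]+)"/g)) {
    const resolved = new URL(match[1], pageUrl).toString();
    if (!resolved.startsWith(origin)) continue; // skip external sites
    await crawl(resolved, origin);
  }
}

// Usage: stay within a single site.
crawl("http://website.com/", "http://website.com/");
```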
I'm trying to get my Chrome extension working with the Google Calendar API. However, the way Google has set up the extension sandbox makes almost anything impossible.
I can't add the Calendar API using JavaScript; I've tried 200 different ways to include the http://www.google.com/jsapi library. Therefore, I want to try to interact with the Calendar API via PHP. Is it even possible to do a POST from a Chrome extension in order to run my PHP file? If not, it's pretty much impossible to interact with any external API that doesn't have a downloadable library, isn't it? If that's the case, I don't see how you can make anything useful with Chrome extensions.
I think you are still having difficulties because you don't completely understand the difference between content scripts and background pages.
Content scripts have certain limits. They can't:
Use chrome.* APIs (except for parts of chrome.extension)
Use variables or functions defined by their extension's pages
Use variables or functions defined by web pages or by other content scripts
Make cross-site XMLHttpRequests
Basically, all they can do is access the DOM of the page they were injected into and communicate with the background page (by sending requests).
The background page thankfully has none of those limits; the only thing it can't do is access the pages the user is viewing. The good news is that the background page can communicate with content scripts (again, through requests).
As you can see, the background page and content scripts complement each other. If you use both at the same time, you have almost no limitations. All you need to do is split your logic correctly between the two.
As to your initial question: content scripts can't make cross-domain requests, but background pages can. You can read more here.
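Here's a minimal sketch of that split, using today's chrome.runtime messaging calls (the "fetchCalendar" action name is made up for illustration, and a real Calendar request would also need OAuth and matching host permissions in the manifest):

```typescript
declare const chrome: any; // provided by the extension runtime

// content-script.ts -- injected into the page; it cannot make cross-site
// requests itself, so it asks the background page to do the work.
chrome.runtime.sendMessage(
  { action: "fetchCalendar" },
  (events: unknown) => {
    console.log("Calendar data fetched by the background page:", events);
  }
);

// background.ts -- no cross-domain limits here (given host permissions),
// so it performs the request and sends the result back.
chrome.runtime.onMessage.addListener(
  (message: any, _sender: any, sendResponse: (data: unknown) => void) => {
    if (message.action === "fetchCalendar") {
      fetch("https://www.googleapis.com/calendar/v3/calendars/primary/events")
        .then((r) => r.json())
        .then(sendResponse);
      return true; // keep the channel open for the async reply
    }
  }
);
```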
I have a FileMaker database, and I need to link records and all their associated data (including container field data) to various points placed on a large PDF image, then make that data appear via Instant Web Publishing when someone clicks the marker for that area on the PDF. For example, the PDF may be an image of a car, and I would have various close-up images of issues with the car, plus descriptions of those images, as records in the database. I would then drop points on the base PDF image, and clicking those points would bring up the close-up images and the other data related to them.
I'm being told this is too much for IWP because:
I need to place the markers outside FileMaker via PDF annotation
FileMaker IWP can't handle the number of markers that may be necessary (it could be up to 1,000 on an E-sized image)
Does anyone have a workaround, or an explanation of why this is a problem?
If I understand correctly, you would like to set up a PDF with links that will open a browser and show data related to what was clicked. Assuming that is the case, the reason this won't work is that IWP does not provide a unique URL for each unique page. For example, here on StackOverflow you can link directly to any question based on its URL:
http://stackoverflow.com/questions/3207775/ -- this question
http://stackoverflow.com/questions/4973921/ -- some other question
IWP uses JavaScript and session variables to manipulate the output to the screen, so there is no way to link to a specific section of your IWP site, since the URL is always something like:
http://yoursite.com/fmi/iwp/cgi?-db=YOUR_DB-loadframes -- Product A
http://yoursite.com/fmi/iwp/cgi?-db=YOUR_DB-loadframes -- Product B
http://yoursite.com/fmi/iwp/cgi?-db=YOUR_DB-loadframes -- Product C
Because of the limited nature of IWP, you will not be able to work around this issue. You'll need to build your own web interface using the Custom Web Publishing Engine, either with the built-in PHP extensions or some other technology that invokes the XML publishing API.
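To illustrate the difference: with a custom web interface, every record gets its own addressable URL that a PDF annotation can point at. A bare-bones sketch (plain Node HTTP with a hypothetical /item/<id> route; a real version would query FileMaker through the PHP or XML publishing API):

```typescript
import * as http from "http";

// Sketch only: each record gets a stable, linkable URL such as /item/42,
// which a PDF annotation could point at; exactly what IWP cannot offer.
http.createServer((req, res) => {
  const match = req.url?.match(/^\/item\/(\d+)$/);
  if (match) {
    const recordId = match[1];
    // A real version would fetch record "recordId" from FileMaker via
    // Custom Web Publishing and render its images and descriptions.
    res.end(`Record ${recordId}`);
  } else {
    res.statusCode = 404;
    res.end("Not found");
  }
}).listen(8080);
```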
I agree with Nate.
IWP is the wrong solution to this problem. You'd be better off simply hosting those images on a web server.
Now here comes the plug: you can use SuperContainer to really simplify the management of the images from FileMaker.
We want to remove the /dotnetnuke/ from all 300 pages of our website that has been running since Feb 09.
Google isn't indexing all of our pages, just 98. I'm thinking that the /dotnetnuke/ is pushing our content too deep into our site for Google to find(?)
We also don't have any PageRank, although our site appears on page one for most search queries. Obviously we don't want to lose our position in Google.
Would you advise that we remove the /dotnetnuke/ from our URLs? If so, should we create a new site and use 301 redirects, or is there a way of removing the /dotnetnuke/ from our existing URLs while still keeping our Google history?
Many thanks
DotNetNuke uses its own URL rewriting, which is built into the framework. DotNetNuke uses the provider model, so you can also plug in your own URL rewriter or get one from a third party. If that is what you need, I'd suggest taking a look at Bruce Chapman's iFinity URL Rewriter as a quality free third-party extension to DotNetNuke. He also offers a fancier commercial version called URL Master, which I haven't needed to use as of yet.
However, I believe the /dotnetnuke/ you're referring to may not actually be part of your "pages," but rather the alias of your DotNetNuke portal (i.e. www.yoursite.com/dotnetnuke). This would mean that /dotnetnuke/ is part of the base path for all pages, because DotNetNuke uses the base path as the identifier that tells it which portal to load. If this is the case, you could potentially just change your portal alias to www.yoursite.com (depending on the level of access you have to the site/server).
Lastly, sometimes virtual pages do not get included in DotNetNuke's site map. If you are using a third-party module for your dynamic content, it may in fact not be represented in your site map. I'd look into which pages are currently represented in your site map as well.
In IIS7 you can use URL rewrite functionality to hide /dotnetnuke/.
A 301 redirect will also work fine (just make sure you are not using a 302; Google doesn't like those).
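To show what the 301 should do, here's a generic sketch (plain Node, not IIS- or DNN-specific; in IIS 7 you would express the same rule with the URL Rewrite module):

```typescript
import * as http from "http";

// Generic illustration of the 301: permanently redirect
// /dotnetnuke/<page> to /<page> so Google carries the history over.
http.createServer((req, res) => {
  if (req.url && req.url.startsWith("/dotnetnuke/")) {
    res.writeHead(301, { Location: req.url.replace("/dotnetnuke", "") });
    res.end();
    return;
  }
  res.end("ok"); // serve the site normally here
}).listen(80);
```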
In addition to the first two answers: if you are running DNN on GoDaddy hosting, note that GoDaddy has a strange way of setting up sites. Here is how you can remove that problem:
Set up a second (non-primary) domain. Under domain management, you can actually assign the second domain to point to a subdirectory. Make sure that the subdirectory is the one you set DNN up in.
I might have this slightly wrong, as I got it off GoDaddy's site, but I have done it twice and got it to work correctly.