How to perform dynamic URL rewrite in code

I am being asked if I can set up a way to do URL rewrites dynamically, on the fly.
My experience with URL rewriting has been primarily with essentially static web.config files, where I knew ahead of time which conditions I needed to support. In this case, though, I'm working with a partner who sends me data about their clients, and when a user of my site gets sent to one of those pages, they'd like me to rewrite the URL so that it looks like their client's URL and not mine.
Example: a user searches my site for Jim's Auto Shop; when I display my (their) content about Jim's Auto Shop, the URL wouldn't appear to be on my site, it would show "www.JimsAutoShop.com" when it's really "www.mysite.com/JimsAutoShop".
I suppose every time our partner pushes us data where this is needed, I could rewrite the web.config file and add a section for that case, but I really don't think that's a good idea. Is there a way to do this dynamically in code, so that when I query my db from a search and see I need to mask the URL, I could do that?
Tech-wise, I do not have direct access to IIS; I'm on a shared server running IIS, and my primary application stack is ColdFusion 10. Thanks.

I don't believe this is possible in the way you describe. By the time your server-side language gets the request, everything has already been processed by the web server; there is nothing left to rewrite. You could technically do this with JavaScript, but it would only be cosmetic, it wouldn't actually change where the page is served from. (I've done this before with other parts of the URL, but browsers won't let you change the domain this way; the History API is restricted to the path and query string on the same origin.) Here is how you would do that: https://developer.mozilla.org/en-US/docs/Web/API/History_API
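For completeness, cosmetically rewriting the visible path with the History API looks something like the sketch below. This only changes what the address bar shows on your own origin; trying to change the host will throw a SecurityError.

// Cosmetically rewrite the visible path after the page loads.
// Only the path/query on the SAME origin can be changed, never the domain.
if (window.history && history.replaceState) {
  history.replaceState(null, "", "/JimsAutoShop");
}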
If this needs to be done, though, the web.config route is the way to go. I had an application where, when data was updated through certain forms in the app, I would grab the web.config and edit one of the rewrite maps.
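For reference, a rewrite map in web.config looks roughly like the sketch below (the map name, keys, and the .cfm target are placeholders, not your actual setup); your code would append an <add> entry to the map each time the partner pushes a new client:

<!-- Sketch of a web.config rewrite map; names and paths are illustrative only. -->
<system.webServer>
  <rewrite>
    <rewriteMaps>
      <rewriteMap name="ClientMap">
        <add key="/JimsAutoShop" value="/client.cfm?client=JimsAutoShop" />
      </rewriteMap>
    </rewriteMaps>
    <rules>
      <rule name="ClientRewrite">
        <match url=".*" />
        <conditions>
          <add input="{ClientMap:{REQUEST_URI}}" pattern="(.+)" />
        </conditions>
        <action type="Rewrite" url="{C:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>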
But I'm not so sure that is what you need. If you want the domain www.JimsAutoShop.com to just pull files from your server, just edit the DNS to point to your server. Rewriting/redirecting isn't needed. That is how sites are supposed to work.

Related

Is it possible to use URL folder/path without creating a corresponding directory on the server?

Suppose I have the following URL, which displays some unique information for the specified user:
http://example.com/?user=john-smith
I want this URL to look like this instead, because it's (supposedly) more SEO-friendly and human-readable:
http://example.com/user/john-smith
Of course I can do it by creating a "user/john-smith" sub-directory in the root and putting an index.php or default.htm or whatever in there.
But I might have millions of users and I really don't want to create millions of sub-directories like this (not even sure I can).
So, how can I make it so that a user enters "http://example.com/user/john-smith" in the browser, but arrives or is somehow redirected to "http://example.com/index.php" or whatever it takes to make this work without creating a separate directory for every user? Is this even possible?
Notes:
- The closest I can get is "http://example.com/user/?john-smith", but that's still not good enough.
- I'm using Windows Server 2016 with PHP 7, but happy to hear solutions for any platform.
Okay, I finally discovered the answer (sadly, no thanks to StackOverflow).
On a Windows server it's actually pretty easy: IIS 7.0 and later support a module called "URL Rewrite" (it isn't part of the default install, but it can be installed separately), which provides a surprisingly convenient GUI for creating rules (using standard regular expressions) that convert "incomprehensible" URLs into something the server understands.
In my example case, I just create a conversion rule from "^user/([a-z-]+)" to "?user={R:1}" and that's it!
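If you'd rather edit web.config by hand than use the GUI, the rule ends up looking roughly like this (assuming the request should be rewritten to index.php, as in the question; treat this as a sketch, not the module's exact output):

<!-- Approximate web.config equivalent of the rule described above. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="UserPages">
        <match url="^user/([a-z-]+)" />
        <action type="Rewrite" url="index.php?user={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>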
There is a similar thing on Apache servers called "mod_rewrite", but there you write the rules manually as RewriteRule directives (in .htaccess or the server config) rather than through a GUI.

Storage of user data

When looking at how websites such as Facebook store profile images, the URLs seem to use randomly generated values. For example, the profile picture for Google's Facebook page has the following URL:
https://scontent-lhr3-1.xx.fbcdn.net/hprofile-xft1/v/t1.0-1/p160x160/11990418_442606765926870_215300303224956260_n.png?oh=28cb5dd4717b7174eed44ca5279a2e37&oe=579938A8
However, why not just organise it like so:
https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png
Clearly this would be much easier in terms of storage and simplicity. Am I missing something? Thanks.
Companies like Facebook run fairly intense CDNs. The URLs may look randomly generated, but they aren't; each individual route is deliberate and programmed to be handled in that manner.
They aren't after simplicity of storage the way you would be if you were just using FTP to push files to a basic marketing website's server. While you might put all your images in an /images folder, Facebook is much too complex for that: dozens of different types of applications accessing hundreds, if not thousands, of CDN nodes and servers worldwide.
If you ever build a web app, such as a Ruby on Rails app, and work with a service such as AWS (Amazon Web Services), you'll also encounter what seem like nonsensical URLs. But it's all part of the fast delivery network provided by the architecture. Every time you push your app up to the server, new URLs are generated automatically for each unique resource: CSS files, JavaScript files, image files, etc. You don't have to type in each of these unique URLs individually each time you publish the app; the code simply knows where to look for them as part of the publishing process.
Example: you tell the web app to look for
//= require jquery
and it returns you http://example.com/assets/jquery-eb3e278249152b5b5d5170b73d9dbf52.js?body=1 in your header.
It doesn't matter that the URL is more complex than it needs to be; the application recognizes it, and that's all that matters.
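To make the fingerprinting idea concrete, here's a minimal sketch in plain Node-style JavaScript (not Rails' actual implementation; the file name is just an example) of how a digest like the one above can be derived from the file's contents, so the URL only changes when the file changes:

// Sketch: derive a content-based "fingerprint" for an asset filename.
const crypto = require("crypto");
const fs = require("fs");

function fingerprintedName(path) {
  const contents = fs.readFileSync(path);          // read the asset file
  const digest = crypto.createHash("md5")          // hash its contents
    .update(contents)
    .digest("hex");
  return path.replace(/\.js$/, "-" + digest + ".js"); // jquery.js -> jquery-<digest>.js
}

console.log(fingerprintedName("jquery.js"));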
Simply put, I think it boils down to two main reasons: security and caching.
Security - adding these long, unpredictable hashes prevents others from guessing photo URLs and makes it pretty hard to download photos you aren't supposed to.
Consider what would happen if I could easily guess your profile photo URL and download it, even when you explicitly chose to share it only with friends.
Cache - by adding "random" query params to each photo, you make sure each photo instance gets its own URL. That way you can store the photo in the browser's cache for a long time, knowing that whenever you replace it with a new one, the new photo will get a fresh URL and the browser won't keep showing you the old photo.
If you were to keep the same URL for each user's profile photo (e.g. https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png) and then upload a new photo, one of these would happen:
If you stored the photo in the browser's cache for a long time, the browser would keep showing you the cached version (as long as the URL is the same and the cache hasn't expired, there's no need to re-download the image).
If, instead, you only kept the image in the cache for a short period of time, you'd end up hitting your server much more than actually needed, increasing the load and hurting performance.
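As a rough sketch of that cache-busting pattern (the URL layout and names here are made up, not Facebook's): bake a version into the URL so cached copies can live practically forever and still never go stale.

// Sketch: version the URL instead of expiring the cache early.
// photoVersion would come from your database and change on every new upload (assumption).
function profilePhotoUrl(profileId, photoVersion) {
  // Serve this with a far-future Cache-Control (e.g. max-age=31536000, immutable);
  // a new upload changes photoVersion, which changes the URL, which bypasses the old cache entry.
  return "https://cdn.example.com/profiles/" + profileId + "/160x160_" + photoVersion + ".png";
}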
I hope this clarifies it.
With your route scheme, how would you stop strangers from accessing the pictures of a private account? The hash also prevents bots from downloading all the pictures.
I feel your pain :-) Rather than dwelling on how this problem comes about, let me speak to a solution. It's normal for hashed or even base64-encoded values to look like a mess to deal with in code, but once you pair them with an identifier that explains what they are, there isn't much mystery left.
I used to work at a company where we collated Facebook posts: using the Graph API we fetched each post's Insights object and extracted the information we needed, so it could be passed around easily within the UI and sent back to our Redis cache store. Once we defined a data structure in TaffyDB for how an object would be organised, everything made sense, thanks to its ability to query the useful pieces out of what otherwise looked like a long stream of minified JavaScript junk.
Refer: http://www.taffydb.com/
The extra values in the URL are useful to:
Track access. This is like when a newspaper appends "&homepage" vs. "&email" to an article URL, so their system knows how a reader found the page.
Avoid abuse and control access. Imagine that a user uploaded a small but popular pornographic image as a profile image; they could then hijack the CDN as a free web host for their porn site. The extra code in the URL is used internally by the CDN to control access and limit the number of views.
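Such access-controlling parameters are typically a signature plus an expiry. As a rough sketch (not Facebook's actual scheme; the secret, host, and parameter meanings are assumptions), a CDN can verify an HMAC and refuse the request once the expiry passes, which makes guessing or hotlinking URLs impractical:

// Sketch of a signed, expiring image URL (illustrative only).
const crypto = require("crypto");

const SECRET = "cdn-signing-key";   // shared between the app and the CDN edge (assumption)

function signImageUrl(path, ttlSeconds) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const signature = crypto
    .createHmac("sha256", SECRET)
    .update(path + ":" + expires)   // bind the signature to both the path and the expiry
    .digest("hex")
    .slice(0, 32);
  return "https://cdn.example.com" + path + "?oh=" + signature + "&oe=" + expires.toString(16);
}

console.log(signImageUrl("/profiles/12345/160x160.png", 3600)); // valid for one hour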

Detecting a change in the page, including refresh

I am working in a .tpl file, meaning I am open to JS, HTML, and PHP answers. What I want is this: whenever a person refreshes the page, experiences a change in the URL, or exits the browser, my site should take an action based on that change of state. Basically, when they leave that specific page of mine in any way, I want to call a function. The reason is that I have an editable image on my site, and whenever the user leaves the page, I want the image they created to be autosaved.
This task splits into client-side and server-side parts. On the client side you bind to the browser events you care about and trigger background HTTP requests to service URLs on your website; this part is probably JS. On the server side you provide the corresponding handling for those requests, which is probably PHP.
Since these service URLs will be called intermittently by various visitors, be sure to keep track of which request came from which client's window; PHP sessions should help you there.
I'd propose working on this in stages: first get the saving machinery working by binding everything to explicit buttons on the page (page close, URL change, etc.), then replace each button with a binding to the actual JS event. Keep in mind the differences among browsers.
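For the client-side half, a minimal sketch might look like the following (the /autosave.php endpoint and the canvas element ID are assumptions, not part of the question):

// Autosave when the user leaves the page in any way (refresh, navigate away, close tab).
// pagehide fires in all of those cases; sendBeacon queues the request so it still
// goes out while the page is being torn down. (For very large images, a fetch with
// { keepalive: true } may be needed instead, since sendBeacon limits payload size.)
window.addEventListener("pagehide", function () {
  var canvas = document.getElementById("editable-image");       // assumed canvas element
  var payload = JSON.stringify({ image: canvas.toDataURL() });  // serialize the current drawing
  navigator.sendBeacon("/autosave.php", payload);               // background save to the PHP side
});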

Rails: How to separate static content and application but while maintaining a connection between the 2?

Ok this question might sound a bit weird, let me try to explain what I am trying to achieve here.
I need:
- some mostly static pages: home page, about us, etc. the usual suspects
- a full complex rails web app
The web app being the heart of the system will have a lot of stuff, including user authentication (with devise by the way). The application will have a standard navigation menu with possible actions changing depending on user status (login or not, admin or not, etc).
Until now, nothing out of the ordinary.
However, for unrelated reasons, I MUST have the entry point of the whole system be the home page, which will be hosted on another server (ergh).
So now, since my home page and other static pages will be on server A and the whole application will be on server B, how can I maintain contact between the two?
Meaning: keep my navigation menu dynamic even on my static pages, and have a sign-in / sign-up form on my static server that registers the account on the "real" application server?
They can share the same database, no problem there.
Any pointers on how to do this? I would really rather not put iframes on the static site...
Thanks!
Alex
For the sign-in/sign-up stuff, you can have your forms' action point to B and then redirect back to A.
To display the right things in the menus, you can make a JSONP call (as Chris said) to fetch either the entire header or just the specific parts of the header that are dynamic.
If you are just looking to include the user's name, you can also simply store their name in a cookie and then use JavaScript to display it in the header.
If there's no cookie, display a link to log in / sign up.
Edit: for the JSONP calls, take a look at a JavaScript framework to make the call client-side. I personally use jQuery: http://api.jquery.com/jQuery.ajax (look at the jsonp options).
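A minimal sketch of both ideas with jQuery (the /menu endpoint on server B, the response shape, the cookie name, and the element IDs are all assumptions):

// Pull the dynamic header from server B into the static page on server A via JSONP.
$.ajax({
  url: "https://app.example.com/menu",   // server B endpoint (assumption)
  dataType: "jsonp",                     // JSONP sidesteps the cross-domain restriction
  success: function (data) {
    $("#nav-menu").html(data.html);      // e.g. data = { html: "<ul>...</ul>" }
  }
});

// Cookie approach for just the user's name:
var match = document.cookie.match(/(?:^|; )username=([^;]*)/);
$("#greeting").text(match ? "Hi, " + decodeURIComponent(match[1]) : "Sign in / Sign up");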
Thinking out loud...
Can you dynamically build the menus using JavaScript/AJAX in the static code? Perhaps that could query server B (via JSONP) to determine the options...
It's going to have to do some "funky" (tm) stuff to track whether there is a user session or not... and to link them...

Sticky notes associated with web page - how to?

I have this idea for a project. Associated with any web page, I want to create notes that will be saved locally in a database; the notes will be reloaded automatically from that database the next time I visit the same page.
Creating the note is easy, but I'm looking for how to link the notes to the web page URL and how to stay aware of the active web page. Any ideas?
(Note: I came across this while searching the internet: http://webkit.org/demos/sticky-notes/ - it's part of the WebKit open source project, and it's about what I'm looking for.)
Thanks.
This is probably browser-dependent, so you'll have to have a plugin for every browser type.
IE might be doable via the COM interface, but that would probably require starting IE in a way you control, so that will probably have to be a plugin too.
For browser independence, there are quite a few challenges here. One way would be to implement a proxy server and watch for text/html content. That will work for most of the general cases, but not every case. Handling frames, for instance: which resource is the "parent" and which is the "child"? Which one contains the sticky note? I think you would have to inject some client-side JavaScript to keep track of things, and that might break some websites.
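As a rough sketch of the injected client-side piece (using localStorage as the local store and keying the notes to the page URL; the class name and styling are assumptions):

// Key the notes to the current page's URL and reload them on every visit.
var key = "sticky-notes:" + location.origin + location.pathname;

var notes = JSON.parse(localStorage.getItem(key) || "[]");
notes.forEach(function (text) {
  var div = document.createElement("div");
  div.className = "sticky-note";        // styled elsewhere (assumption)
  div.textContent = text;
  document.body.appendChild(div);
});

// Call this whenever the user finishes writing a note.
function addNote(text) {
  notes.push(text);
  localStorage.setItem(key, JSON.stringify(notes));
}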
protonotes.com is a web service version of this. Not sure how they do it though.
Actually, Daniel H hit the nail on the head mate: http://www.protonotes.com
It does exactly what you want. In fact it gives you two options for storing your data: the first is hosted, and the second is your own MySQL db - protonotes pipes the data from the tack-on style notes to your own db, if you prefer. This means that you're not the only person who can see the notes - access is granted by a unique 'group' key.
I've just deployed protonotes as our main online review tool for two reasons: we can save our own data, and it lacks some features that I generally label "dubious" anyway.
Its simplicity is great. The only thing I'm aware of that could cause a problem is that it dumps a bunch of stuff into the global namespace, if that's an issue for you.
d
