I have a SharePoint site developed in MOSS 2007. The site uses a SharePoint farm architecture that includes the following servers:
One application server (APP1)
Two mirrored database servers (DB1, DB2)
Two web front-end servers (WFE1, WFE2)
Recently we have been receiving complaints from customers that when a user logs in with his credentials and navigates to the site and its sub-sites, clicking the Home (root site) tab sometimes loses his logged-in session and shows him the home page as another user, with that user's access privileges.
Can someone tell me what the root cause of this is?
Any help will be highly appreciated.
FYI: I have implemented caching to improve the performance of this site.
Thanks & Regards,
Sachin k
It seems to be a problem with the load balancer configuration. Special care needs to be taken when SharePoint is load balanced under Kerberos.
See:
http://sharepointspot.blogspot.com/2007/09/kerberos-load-balanced-web-sites.html
http://blogs.msdn.com/b/joelo/archive/2007/01/05/nlb-network-load-balancing-and-sharepoint-troubleshooting-and-configuration-tips.aspx
Specifically:
To manage session transfer:
Use ASP.NET SQL session management so that all sessions are recorded in a central location (a minimal sketch of what this means at the code level follows below), OR
Use affinity on the switch so that requests from a particular IP address within the same session go to the same target server.
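For the SQL option specifically, the main code-level consequence is that everything placed in Session must be serializable once state moves out of process. A minimal sketch, assuming mode="SQLServer" is configured in web.config (the type and page names below are made up):

```csharp
using System;
using System.Web.UI;

// Illustrative only: once sessionState mode="SQLServer" (or StateServer) is used,
// every object stored in Session has to be serializable, or the request will fail
// when the value is persisted to the central session database.
[Serializable]
public class VisitInfo
{
    public string LoginName { get; set; }
    public DateTime LastVisit { get; set; }
}

public class HomePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var info = (VisitInfo)Session["VisitInfo"];
        if (info == null)
        {
            info = new VisitInfo
            {
                LoginName = User.Identity.Name,
                LastVisit = DateTime.UtcNow
            };
            // Round-trips through the central session database, so any front end
            // in the farm sees the same value on the next request.
            Session["VisitInfo"] = info;
        }
    }
}
```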
I need to know the best practices for deploying a new version of an ASP.NET MVC application while users are still connected to it
Every time you deploy the .dll that contains the application's models and controllers, the application restarts. Deploying the web.config (which may reference new libraries) also restarts the application.
So, the question is: how do I update the application's dll or web.config without disconnecting the users from the site?
You want to use a session state option other than in-proc so your users' sessions survive when the process recycles or the system reboots.
InProc: In-Proc mode stores values in the memory of the ASP.NET worker process. Thus, this mode offers the fastest access to these values. However, when the ASP.NET worker process recycles, the state data is lost.
See ASP.NET Session State Options for more ASP.NET options and mentions of other third party session state providers.
This question also deals with possible deployment scenarios to help with the websites under load and slow app times after a pool recycle: How are people solving app pool recycle issues on deployment with large apps?
Ideally you want to be as stateless as you can and stay away from session state. Perhaps you can track the current user with a cookie, for example via forms authentication. If you do need session state, stay away from in-proc and use a distributed cache/session provider so users won't lose it on app pool recycles.
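A minimal sketch of the cookie-based approach with forms authentication (the controller and action names are made up, and it assumes an ASP.NET membership provider is configured):

```csharp
using System.Web.Mvc;
using System.Web.Security;

// Sketch: the user's identity lives in the encrypted forms-auth cookie rather than in
// Session, so it survives app pool recycles and deployments (as long as the machineKey
// is stable/shared across servers).
public class AccountController : Controller
{
    [HttpPost]
    public ActionResult Login(string userName, string password)
    {
        // Assumes an ASP.NET membership provider; swap in your own credential check.
        if (Membership.ValidateUser(userName, password))
        {
            FormsAuthentication.SetAuthCookie(userName, false /* not persistent */);
            return RedirectToAction("Index", "Home");
        }
        return View();
    }

    public ActionResult WhoAmI()
    {
        // No Session access: the name comes straight from the forms-auth ticket.
        return Content(Request.IsAuthenticated ? User.Identity.Name : "anonymous");
    }
}
```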
I think the best approach is to deploy a new site for new sessions and maintain existing sessions on the old one.
I feel that the blue-green deployment strategy described in the article linked below can be adapted with a few changes to do that (disallow new connections instead of issuing a "drain", and use sticky sessions).
https://kevinareed.com/2015/11/07/how-to-deploy-anything-in-iis-with-zero-downtime-on-a-single-server/
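The article does the routing with ARR so existing sessions can keep hitting the old copy. Stripped down to just the cut-over step, moving the public binding between two IIS sites with Microsoft.Web.Administration might look roughly like this (the site names and host name are made up, and it has to run elevated):

```csharp
using System;
using Microsoft.Web.Administration;

// Sketch of the cut-over step only: two copies of the app are deployed as separate IIS
// sites, and the public binding is moved to the freshly deployed copy. Keeping existing
// sessions pinned to the old copy still needs the sticky-session routing (ARR) described
// in the linked article.
class SwapBinding
{
    static void Main()
    {
        const string publicBinding = "*:80:www.example.com";   // hypothetical host name

        using (var iis = new ServerManager())
        {
            Site oldSite = iis.Sites["MyApp-Blue"];    // currently serving traffic
            Site newSite = iis.Sites["MyApp-Green"];   // freshly deployed copy

            Binding toMove = null;
            foreach (Binding binding in oldSite.Bindings)
            {
                if (binding.BindingInformation == publicBinding)
                {
                    toMove = binding;
                }
            }

            if (toMove != null)
            {
                oldSite.Bindings.Remove(toMove);
            }
            newSite.Bindings.Add(publicBinding, "http");

            iis.CommitChanges();
            Console.WriteLine("Public traffic now goes to MyApp-Green.");
        }
    }
}
```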
I have a site based on ASP.NET MVC on Windows hosting. Now I need another site based on PHP on Linux. I authenticate a user on the Windows site and let him upload some information, which could be audio/video or images. I want this information to go to the Linux-based site.
How would I make sure that he can only upload to the Linux server when he is logged into the Windows-based site?
So basically, before the Linux-based site saves anything, it should verify that the user is logged into the Windows site. And what about the logout process?
Help will be appreciated.
Regards
Parminder
If the two sites use the same domain, your Windows site can save a value in a cookie, and the Linux site can use that value to determine whether the user is authenticated. However, this only works if the two sites are under the same domain name (the port can differ).
If not, I think single sign-on is advisable for security reasons, and OpenID or OAuth is recommended.
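If you go the shared-domain cookie route, a rough sketch of what the Windows side could issue is below. The parent domain, cookie name, and shared secret are all placeholders, and the PHP side would have to re-compute the same HMAC before trusting the cookie:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

// Sketch only: issue a tamper-evident cookie scoped to the common parent domain so the
// Linux/PHP site can check it. Logout is handled by expiring the same cookie.
public static class CrossSiteAuthCookie
{
    private const string CookieName = "xsite_auth";
    private const string SharedSecret = "replace-with-a-long-random-secret"; // shared with the PHP site out of band

    public static void Issue(HttpResponse response, string userName)
    {
        string expiresUtc = DateTime.UtcNow.AddMinutes(30).ToString("o");
        string payload = userName + "|" + expiresUtc;

        var cookie = new HttpCookie(CookieName, payload + "|" + Sign(payload))
        {
            Domain = ".example.com",   // hypothetical parent domain shared by both sites
            HttpOnly = true,
            Secure = true
        };
        response.Cookies.Add(cookie);
    }

    public static void Revoke(HttpResponse response)
    {
        // On logout, overwrite the cookie with one that is already expired.
        var expired = new HttpCookie(CookieName, string.Empty)
        {
            Domain = ".example.com",
            Expires = DateTime.UtcNow.AddDays(-1)
        };
        response.Cookies.Add(expired);
    }

    private static string Sign(string payload)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(SharedSecret)))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }
}
```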
We are building a new web application that needs to run inside the SP Context for authentication. Unfortunately the person logged into the machine is not necessarily the person logged into SharePoint. I could not figure out a way to detect who was logged into SharePoint from an application outside of SharePoint. So, the solution is to deploy the application to the LAYOUTS folder within the 12 hive. This works great in that I can use a custom master page, go crazy with fancy user controls, AND be within the SP Context. I also locked down access to the page by detecting which web app the user was on so no one can access it from a different SP web app.
The problem is the URL. It is ugly. I want the URL to be something like www.sitename.com/ instead of www.sitename.com/_layouts/appname/
I tried creating a new web site in IIS that points directly to the app in the LAYOUTS folder. That failed because I was no longer within the SP Context.
I also tried an IIS redirect, which worked, but the URL still switched over to the ugly one.
Does anyone have any ideas for this?
My original problem was not being able to detect the currently logged-in SharePoint user from outside of SP, so if you have a solution to that problem, that would be great too.
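For context, the check inside the LAYOUTS page looks roughly like this (the web application name below is just a placeholder):

```csharp
using System;
using System.Web.UI;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Runs from /_layouts/, so it executes inside the SharePoint context and SPContext
// resolves both the authenticated SharePoint user and the web application that served
// the request. "IntranetWebApp" is only a placeholder.
public class AppHomePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        SPWeb web = SPContext.Current.Web;
        string login = web.CurrentUser.LoginName;   // the user SharePoint authenticated

        // Lock the page down to a single web application.
        SPWebApplication webApp = SPContext.Current.Site.WebApplication;
        if (!webApp.Name.Equals("IntranetWebApp", StringComparison.OrdinalIgnoreCase))
        {
            throw new UnauthorizedAccessException("This application is only available on its own web application.");
        }

        // ... render the page for 'login' ...
    }
}
```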
Your best option is to rewrite the URLs and HTML with a proxy. Apache with mod_rewrite and mod_proxy_html is one option. However, this kind of setup is not trivial.
You can use URL rewriting in IIS.
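If the IIS URL Rewrite module is not available on the box, roughly the same effect can be had with a small ASP.NET HttpModule registered in the web application's web.config. A sketch, where "default.aspx" is assumed to be the app's start page:

```csharp
using System;
using System.Web;

// Sketch of an ASP.NET-level rewrite: requests for the site root are rewritten server-side
// to the page in the LAYOUTS folder, so the browser keeps showing www.sitename.com/ while
// the request still flows through the SharePoint pipeline. Register this module in the
// <httpModules> section of the web application's web.config.
public class RootRewriteModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        if (context.Request.Path.Equals("/", StringComparison.OrdinalIgnoreCase))
        {
            // "default.aspx" is an assumption about the app's entry page.
            context.RewritePath("/_layouts/appname/default.aspx");
        }
    }

    public void Dispose() { }
}
```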
I have an ASP.NET MVC web application that is hosted on a shared hosting account. The site has no issues during regular usage. However, the nature of the business is such that for one week out of the month we have very high traffic. During these peak load times, my application returns several "Service Unavailable" errors.
One of the possible solutions I am looking at is to spin up a Windows Azure web role during peak traffic week and spin it down again after the week is up. (I know exactly when the load is going to be high) Right now, we don't have enough revenue to justify moving the site permanently to the cloud.
My question is how to handle DNS. I would like the move to Azure and back to the hosted service to be seamless to the user. The user should be able to type my normal URL and reach my hosted site during off-peak weeks and the cloud app during the peak week. My guess is to add some kind of CNAME record to the DNS server, but I have no idea how to go about doing this. Does anybody know of any resources on how to update the DNS so this scenario would work?
Yup, a CNAME record sounds to me like the way to go. See http://blog.smarx.com/posts/custom-domain-names-in-windows-azure. (Sorry, one of the images looks broken... I'll try to patch that up.)
The scheme would be: have www.foo.com point to your current app instance, and then change it to point to something.cloudapp.net when that week comes up... then switch it back after the rush is over.
Is it possible to just have the MVC site hosted on another web hosting account permanently? It doesn't necessarily have to be an Azure account, does it? Is the site written in MVC1 or MVC2?
If your shared hosting server cannot handle the peak, then one instance of an Azure web role probably won't work either.
I would try something else: keep your ASP.NET code on that server but move all the static content somewhere else (for example, the Azure CDN) using a subdomain. If you use jQuery and currently serve the file from your own site, you can already switch the link to Google's CDN or Microsoft's CDN for free.
I am using SharePoint Server 2007 with the collaboration portal template on Windows Server 2008. When I use the following function in Central Administration, under Application Management -> Search -> Manage Search Service, I get the error message below. Any ideas what is wrong?
The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.
thanks in advance,
George
This can happen for a variety of reasons, e.g. Search is in fact moving or setting up the indexer, patch levels differ across the farm, etc. Rather than attempt to list them all with their fixes, please see the links below:
http://msmvps.com/blogs/shane/archive/2009/04/13/fixing-moss-search.aspx
http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2009/02/24/the-search-service-is-currently-offline-visit-the-services-on-server-page-in-sharepoint-central-administration-to-verify-whether-the-service-is-enabled-this-might-also-be-because-an-indexer-move-is-in-progress.aspx
I would suggest starting with the event logs on your indexer. Also cross-check them against the event logs on the query servers.
Typically, if something is wrong, it will be logged there (or in the ULS logs). But in this case start with the Event Viewer, and reset the Search service on the indexer as well (using services.msc).
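If you want to script that first pass over the Application log, a rough sketch is below; the indexer machine name and the "Search" source filter are assumptions, so adjust them to your farm:

```csharp
using System;
using System.Diagnostics;

// Sketch: dump recent non-informational Application-log entries from the indexer whose
// source mentions "Search", as a starting point before digging into the ULS logs.
class DumpSearchEvents
{
    static void Main()
    {
        using (var log = new EventLog("Application", "INDEXER01"))   // hypothetical indexer host name
        {
            DateTime since = DateTime.Now.AddDays(-1);

            foreach (EventLogEntry entry in log.Entries)
            {
                bool recent = entry.TimeGenerated > since;
                bool searchRelated = entry.Source.IndexOf("Search", StringComparison.OrdinalIgnoreCase) >= 0;

                if (recent && searchRelated && entry.EntryType != EventLogEntryType.Information)
                {
                    Console.WriteLine("{0} [{1}] {2}: {3}",
                        entry.TimeGenerated, entry.EntryType, entry.Source, entry.Message);
                }
            }
        }
    }
}
```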