Umbraco relations service and slave Umbraco instances

This question relates to Umbraco, Umbraco slave instances and the Umbraco Relation Service API.
We currently have a site designated as the master Umbraco instance, which handles all updates to content. Our intention is to set up slave instances in different regions around the world behind a traffic manager to improve site performance globally. We've tested this setup and it works fine.
As I understand it, the Umbraco slave instances with the "out of the box" slave configuration will not have their own databases, but instead poll a service on the master instance for content.
Question
We were planning on using the Umbraco relations API to relate multilingual content (roughly as sketched below). I understand that this incurs a hit on the database; since our slave websites will not have a database of their own, I presume this won't work.
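For reference, this is roughly how we intended to use it (a sketch only: the relation type alias and content IDs are placeholders, and the calls reflect my reading of the v7 RelationService, so corrections are welcome):

using Umbraco.Core;

// Sketch only: "relatedLanguageVersion" is a made-up relation type alias and the
// content IDs are placeholders.
var relationService = ApplicationContext.Current.Services.RelationService;

int englishPageId = 1234;
int frenchPageId = 5678;

// Relate the English page to its French counterpart when content is saved.
var relationType = relationService.GetRelationTypeByAlias("relatedLanguageVersion");
relationService.Relate(englishPageId, frenchPageId, relationType);

// On the front end, look up the related nodes for the current page; this is the
// lookup we assume needs a database and therefore won't work on a slave.
var relations = relationService.GetByParentId(englishPageId);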
Is my understanding of the situation correct?
Can the relations API be configured to work in this situation?
If not, is there a recommended alternate approach to managing related content in a way that will be supported by slave servers?
Thanks for any help you can provide, I'd be happy to answer any questions or provide any clarification necessary.

Related

Queries regarding Neo4j HA setup

Hi, I am new to HA concepts and Neo4j HA. I have gone through the Neo4j docs, but I still have a couple of questions.
When using a PHP script to connect to the Neo4j database via REST, what IP should I use for the cluster? Is there a common IP for the cluster?
I ask this because if the master fails, a new Neo4j instance becomes the master. How should my script connect to the new master? Should I use third-party software to point to the new master, or can that happen automatically with Neo4j through a common cluster IP? Pardon me if my concepts are weak; I just need some guidance.
How can I direct all reads and writes to the master only and use the slaves only for replication? Or is this the default setting? I see multiple-read and multiple-write scenarios, so I am getting confused.
Is there any doc/material that explains setting up an arbiter instance in more detail, or should I just configure a 3-node Neo4j HA cluster as explained in http://neo4j.com/docs/stable/ha-setup-tutorial.html and run the command below for one of the instances:
neo4j_home$ ./bin/neo4j-arbiter start
Any help is appreciated. Thanks!
Welcome to the community of Neo4j users ;)
First, I recommend you look at neo4j-php-client, because it supports Neo4j HA clusters and could solve your questions and problems, rather than you having to build your own solution.
Best practice is to use some kind of load balancer in front of the Neo4j HA cluster. Here is a great article about it: http://blog.armbruster-it.de/2015/08/neo4j-and-haproxy-some-best-practices-and-tricks/
You can do the routing at the load balancer level based on HTTP methods (GET goes to the slaves; POST, PUT and DELETE go to the master). But there is a problem with the Cypher endpoint, because it uses only the POST method. You can use an additional HTTP header to distinguish between read and write requests, but that logic must be in your application; a rough example of the load-balancer side follows.
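A minimal HAProxy sketch of that header-based split might look like this (the header name, addresses and ports are made up):

frontend neo4j
    bind *:7474
    mode http
    # requests the application marks with this (made-up) header go to the master
    acl is_write hdr(X-Write) -i 1
    use_backend neo4j_master if is_write
    default_backend neo4j_slaves

backend neo4j_master
    mode http
    server master 10.0.0.1:7474 check

backend neo4j_slaves
    mode http
    balance roundrobin
    server slave1 10.0.0.2:7474 check
    server slave2 10.0.0.3:7474 check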
To start with, the official documentation is good enough.
Resources
Neo4j HA cluster configuration (example)
Neo4j cluster and firewalls
As my friend MicTech mentioned, generally we use HAProxy as a load balancer on top of Neo4j.
With the PHP client mentioned, you have a great configuration mechanism that allows you to:
When using HAProxy, declare your queries as read or write so that the client automatically adds a header to the HTTP request. The header is configurable too.
When not using HAProxy, define all your Neo4j instances in the client setup and activate the High-Availability extension (works only with cache enabled). Then, when the master is down, the client will automatically try to detect the newly elected master and rewrite the connection configuration in the cache for further requests.
I tried to make the README as good as possible; please read it and open issues on the repository if anything is missing.
https://github.com/graphaware/neo4j-php-client

AzureWorkerHost: get the URI after startup for Neo4jClient

I am trying to create an ASP.NET project with Neo4jClient, to be hosted on Azure, and am kind of unable to grasp how to do the following:
Get hold of the Neo4j REST endpoint address once the worker role has started. I think I am seeing a different address each time the emulator spins up an instance of the worker role. I believe I'll need this to create a client, somewhat like this:
neo4jClient = new GraphClient(new Uri("http://localhost:7474/db/data"));
So, any thoughts on how to get hold of the URI after Neo4j is deployed by AzureWorkerHost?
Also, how is the graph database persisted to the blob store? In the example it always deploys a new, pristine instance of the database from the zip, which is probably not correct. I am unable to understand where to configure this.
By the way, I am using Neo4j 2.0 M06, and when it runs in the emulator I get an endpoint like http://127.255.0.1:20000 in the emulator log, but I am unable to access it from my base machine.
Any clue what might be going on here?
Thanks,
Kiran
AzureWorkerHost was a proof of concept that hasn't been touched in a year.
The GitHub readme says:
Just past alpha. Some known deficiencies still. Not quite beta.
You likely don't want to use it.
The preferred way of hosting on Azure these days seems to be the IaaS approach, inside a VM. (There's a preconfigured one in VM Depot, but that's a little old now too.)
Or, you could use a hosted endpoint from somebody like GrapheneDB.
To answer your question generally though, Azure manages all the endpoints. The worker role says "hey, I need an endpoint to bind to!" and Azure works that out for it.
Then, you query this from the web role by interrogating Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.Roles, roughly as sketched below.
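A minimal sketch, assuming a worker role named "Neo4jWorker" with an endpoint named "Neo4j" in the service definition (both names are made up; use whatever your service definition declares):

using System;
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;
using Neo4jClient;

// Look up the endpoint Azure assigned to the (hypothetical) "Neo4jWorker" role.
var neo4jInstance = RoleEnvironment.Roles["Neo4jWorker"].Instances.First();
var endpoint = neo4jInstance.InstanceEndpoints["Neo4j"].IPEndpoint;

// Build the Neo4jClient connection from the endpoint discovered at runtime.
var rootUri = new Uri(string.Format("http://{0}:{1}/db/data", endpoint.Address, endpoint.Port));
var client = new GraphClient(rootUri);
client.Connect();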
You'll likely not want to use the AzureWorkerHost for a production scenario, as the instances in the deployed configuration will destroy your data when they are re-imaged.
Please review these slides that illustrate step-by-step deployment of a Windows Azure Virtual Machine image of Neo4j community edition.
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
A Neo4j 2.0 Community Virtual Machine image will be released with the official release build of Neo4j 2.0. If you plan to use more than 30GB of data storage, please be aware that the currently supported VM image in Windows Azure's image depot must be configured from console through remote SSH to Linux.
Continue with your development using http://localhost:7474/ and then set up the VM when you are ready to deploy a staging or production build.
You can also use Heroku's free Neo4j database deployment, but you must configure basic authentication for your GraphClient connection in Neo4jClient; a rough example follows.
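Something like this, where the host and credentials are placeholders supplied by your hosted Neo4j provider:

using System;
using Neo4jClient;

// Placeholder URI and credentials; substitute the values from your provider.
var client = new GraphClient(
    new Uri("http://your-instance.example.com:7474/db/data"),
    "username",
    "password");
client.Connect();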

Windows Azure + ASP.NET MVC + E-Commerce

I will develop and host an e-commerce website based on ASP.NET MVC 4 (with several SQL Server jobs).
I am thinking of using Azure in order to stay in Microsoft's world and avoid dedicated server management.
The shared Web Sites package with 1 site / 5 GB SQL Server database / 200 GB bandwidth is very interesting at the 12-month pricing.
But I don't know if this configuration is enough, especially regarding the bandwidth.
What do you think? Have you used Azure with this type of application?
Regards,
Guillaume.
If you want to develop an e-commerce application, you will have to secure customers' sensitive data (credit cards, address details, etc.) via secure connections (HTTPS; in many countries this is a legal requirement). For that reason you will have to have SSL support.
Azure Web Sites do not support SSL for custom domains. However, they support SSL for the *.azurewebsites.net DNS name. So if your e-commerce application's DNS name will be, say, my-ecom-app.azurewebsites.net, then it's fine. Otherwise, I would not recommend the Azure Web Sites solution yet (I am sure SSL support for custom domains on Azure Web Sites will be implemented).
Azure Cloud Services, on the other hand, have full support for SSL with custom domains.
One of the really good places to check Azure features and the development roadmap is ScottGu's Blog.
Azure Web Sites do not support SSL, and I really don't know of any successful e-commerce site that does not run SSL for at least part of the website. If you really want to host your e-commerce site on Azure today, your only real choice is to run virtual machines for your web front-end servers, and either use them for your DB as well or use SQL Azure.
We developed a platform called Virto Commerce that does just that: an MVC 4 website hosted on Azure. There was also a need for SQL jobs (indexing, payment processing, cart cleanups and so on), for which we used a WorkerRole (instead of a WebRole). A WorkerRole and a WebRole can actually be combined as part of a single deployment; however, it is better to use a separate instance for worker roles. In our case the WorkerRole acted as a scheduler for multiple jobs defined in the database.
The challenge with WorkerRoles, however, is to make sure they scale well when new instances are added, so the workload needs to be distributed between multiple instances. This is done through the use of queues and blob locks, where each job is split into two parts: one that schedules and partitions the work, and a second that picks up the next partition and completes it (roughly as sketched below).
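This isn't Virto Commerce's actual code, just a rough sketch of the pattern using the Azure storage queue API (the queue name, message format and partition count are made up):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Placeholder connection string; in a real role this comes from configuration.
var account = CloudStorageAccount.Parse("<storage connection string>");
var queue = account.CreateCloudQueueClient().GetQueueReference("job-partitions");
queue.CreateIfNotExists();

// Scheduler side: split a job into partitions and enqueue one message per partition.
for (var partition = 0; partition < 10; partition++)
{
    queue.AddMessage(new CloudQueueMessage("reindex:" + partition));
}

// Worker side (any instance): grab the next partition; the visibility timeout acts
// as a lock, so other instances won't see the message while it is being processed.
var message = queue.GetMessage(TimeSpan.FromMinutes(5));
if (message != null)
{
    Console.WriteLine("processing " + message.AsString); // real job work goes here
    queue.DeleteMessage(message);
}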
Hope this helps!
PS: Virto Commerce is now available as an open-source project on CodePlex; go to http://virtocommerce.codeplex.com

How can I design my applications to take advantage of Azure but prevent being locked in? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I am starting to migrate a couple of applications to Azure. It seems very straightforward: all I have to do is add an additional Azure project to my solution and point it at my web project.
However, what concerns me is that the team will start contaminating my applications with Azure-specific functionality and relying on it. For example, file uploads would go to Azure storage, Azure caching would be used, etc. That's all well and good if we stay on Azure and all clients are happy to use Azure. If we find a client isn't happy with Azure, I'd like not to be in for a lot of work removing Azure functionality.
Just wondering if anyone has experienced similar issues. I guess ideally I'd like one publish that targets Azure and uses Azure features, Azure code, etc., and a second publish that just lets me use IIS with non-Azure features.
I assume I just need to be careful to use interfaces correctly, DI, etc. (FileUpload vs. AzureFileUpload). What about issues like CSS/script resources coming from Azure storage rather than locally? Should I look at using Azure Cloud Drive to simulate a standard NTFS environment?
Is there any advice/patterns/practices? Has anyone had similar experiences? How about separating projects and structuring the solution? I guess a lot of it is just standard design. Just wondering how other people are approaching avoiding lock-in with Azure.
There are a couple of things you can do if you're concerned:
Stick to the core technologies like asp.net, ado.net, sql which also exist outside of Azure.
Abstract away code which uses Azure specific services.
For the first one, simply scan your code to ensure the runtime services don't include Azure namespaces.
But to act like a cloud service and get its benefits, you should look into adopting Azure services.
For the second, you can create a cloud services layer abstracted behind an interface. Only that layer communicates with Azure-specific services. If you need to work outside of Azure, you just need another plug-in for that layer (see the sketch below).
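A minimal sketch of what that layer might look like for file storage (the interface and class names are made up; the Azure plug-in uses the storage client library):

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// The only abstraction the application code sees.
public interface IFileStore
{
    void Save(string name, Stream content);
}

// Plug-in used when running on plain IIS.
public class LocalDiskFileStore : IFileStore
{
    private readonly string _root;
    public LocalDiskFileStore(string root) { _root = root; }

    public void Save(string name, Stream content)
    {
        using (var file = File.Create(Path.Combine(_root, name)))
        {
            content.CopyTo(file);
        }
    }
}

// Plug-in used when running on Azure.
public class AzureBlobFileStore : IFileStore
{
    private readonly CloudBlobContainer _container;

    public AzureBlobFileStore(string connectionString, string containerName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        _container = account.CreateCloudBlobClient().GetContainerReference(containerName);
        _container.CreateIfNotExists();
    }

    public void Save(string name, Stream content)
    {
        _container.GetBlockBlobReference(name).UploadFromStream(content);
    }
}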
If you want the app to be able to run on IIS or Azure, and those are your only two targets, my only advice is: don't overdo the abstractions/interfaces. There are some differences that can be handled in web.config and WebRole.OnStart(), such as using the cache as a session provider or logging diagnostics to table storage.
For some things it will help to create interfaces and then inject the implementations via config depending on your deploy target (a web.config transform is what we use). For example, in IIS you might want to send an email on a separate thread, whereas in Azure you might use a worker role and a queue. You can set up a web.config transform with one implementation of ISendEmails for IIS, and a different one for Azure (roughly as sketched below).
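For instance, the concrete type could come from an appSetting that the transform swaps per deploy target (the setting name and factory are hypothetical; only ISendEmails comes from the text above):

using System;
using System.Configuration;

public interface ISendEmails
{
    void Send(string to, string subject, string body);
}

// Resolves whichever implementation the current web.config names, e.g. a
// threaded SMTP sender for IIS or a queue-writing sender for Azure.
public static class EmailSenderFactory
{
    public static ISendEmails Create()
    {
        var typeName = ConfigurationManager.AppSettings["EmailSenderType"];
        return (ISendEmails)Activator.CreateInstance(Type.GetType(typeName, throwOnError: true));
    }
}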
Another thing you could do, depending on how much file data you have, is store files as blob columns in the DB. I'm sure someone will tell me this isn't good for performance and can get expensive with gigabytes of file data in SQL Server, and they have a point. It may be worth considering, though, if having IIS/Azure flexibility is a high concern.
I would design a cloud interface (as an abstraction of an actual cloud/network) that your applications can use, together with an Azure implementation of that interface.
Then later, when needed, you can make other cloud implementations that your apps can use through the same interface.
When designing the interface, the challenge is to include only generic methods that are relevant on every kind of cloud/network. So this will prevent using any Azure specific features directly by your applications, but that is exactly the purpose.

SharePoint Search with Network Load Balancing (NLB)

SharePoint MOSS 2007 on a 64-bit OS and SQL Server. We added a new web front end (WFE) to our farm and all sites seem to work fine - but now we've noticed that the search service has completely stopped working. It works if I change my hosts file to point to the original WFE, but if I use the NLB IP or the IP of the new WFE, it says "Unable to Connect to the Search Service".
Help.
As someone who was until recently a SharePoint developer, one of the biggest issues I have come across is load-balanced environments. Does your alternate access mapping configuration contain the proper references?
The way we do it is to keep one WFE outside the NLB and have the indexer use that machine. Not only is this better for performance (the separate WFE serves the indexer only, while regular traffic goes through the NLB, so indexing doesn't interfere with regular users visiting the site), but you also circumvent issues like this one.
P.S. This question DOES belong on Server Fault though; voted to move.
I just set up a new medium-farm MOSS 2007 x64 environment and ran into a few snags. This is what we ended up doing:
2 WFEs, 1 index server, 1 SQL Server - all running Windows Server 2008
The 2 WFEs have an NLB cluster configured and host queries (but not indexing).
The index server is also a WFE, is NOT part of the cluster, and hosts indexing (but not queries).
The index server had to have the loopback check disabled and a hosts file entry set up to point the portal DNS name to 127.0.0.1 (see below). Without those settings it was generating errors. With them, it can index itself without affecting portal performance while still being able to replicate its index to the query servers.
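For reference, on the index server that usually means the standard loopback-check registry tweak (a reboot is typically recommended afterwards):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

plus a line like this in C:\Windows\System32\drivers\etc\hosts, where the portal host name is a placeholder for your own:

127.0.0.1    portal.example.com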
Hope that helps.
