Rackspace + Google Cloud SQL - connection

Im getting "ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0" trying to connect from rackspace to Google Cloud SQL. It just started to issue this error on 2014-07-14 arround 15:49.
My Rackspace server ip is authorized on Cloud SQL admin interface, also, I'm connecting from other ips outside from Rackspace infra and its all seems ok.
Another interesting point is that I don't receive the error every time, just arround 2/3 of the times.

Looks like a potentially known issue: https://groups.google.com/forum/#!topic/google-cloud-sql-announce/WIQ2g13aItI
"We are currently experiencing an issue with Google Cloud SQL and some users are experiencing loss of connectivity from Google Compute Engine to Cloud SQL instances. For everyone who is affected, we apologize for any inconvenience you may be experiencing."
You're not on Compute Engine, but it's possible that you're getting hit by the same problem.
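Since the failure is intermittent (roughly two-thirds of attempts) and appears to be on the provider side, a client-side retry with exponential backoff can work around it until the upstream issue is fixed. A minimal sketch in Python; the `connect` callable here is a stand-in for your real driver call (e.g. `pymysql.connect(...)`), which is an assumption, not part of the original question:

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call `connect()` until it succeeds, backing off exponentially.

    `connect` is any zero-argument callable that returns a connection
    or raises on failure (a stand-in for a real driver call)."""
    last_error = None
    for attempt in range(attempts):
        try:
            return connect()
        except Exception as exc:  # narrow to OperationalError with a real driver
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_error

# A flaky fake that fails twice, then succeeds, to show the behavior:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("Lost connection to MySQL server")
    return "connection"
```

With a real driver you would also want to cap the total wait time and log each failed attempt, so a persistent outage surfaces quickly instead of being silently retried.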

Related

Azure Equivalent of Resource Group Local Host

I've had a little dig through azure documentation but couldn't find a definitive answer.
I have an app service and an azure db sitting in the same resource group, and I am finding the site takes a long time to connect and get responses back from the database only in the hosted environment.
Is it possible to specify a localhost equivalent as they are in the same resource group, and would this make things any quicker?
A Resource Group has no impact on the connectivity or latency between the application and the database; it is just a way to group Azure resources together by project or environment.
There is no localhost equivalent for a resource group or even an App Service, unless you run your application in IIS or some other server you control.
If you want to see what is actually causing the slow responses, I recommend monitoring the requests and responses with Azure Monitor.
It helps to understand the underlying cloud concepts before trying things out.
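Before reaching for platform tooling, it can help to measure the round trip from the app to the database yourself, so you know whether the latency is in the query or elsewhere. A minimal timing helper in Python; `run_query` is a hypothetical stand-in for your actual database call:

```python
import time

def time_round_trips(run_query, samples=5):
    """Time `run_query()` several times; return (min, avg, max) in milliseconds.

    `run_query` is any zero-argument callable, e.g. a lambda that executes
    SELECT 1 against your database (placeholder, not a real connection here)."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        run_query()
        durations.append((time.perf_counter() - start) * 1000.0)
    return min(durations), sum(durations) / len(durations), max(durations)
```

Comparing these numbers from your local machine versus from the hosted App Service usually makes it obvious whether the problem is network distance, connection setup, or the query itself.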

Web API sometime working some time failed in Windows Azure "NetworkError": 500 Internal Server Error

We have deployed our first application on Windows Azure with a SQL Azure database. In the application, web service calls sometimes fail to complete.
We configured all the required settings and the website works properly, but user registration sometimes fails and sometimes succeeds with the same valid input. Please help me; I am new to Windows Azure.
See the error and success screenshots:
Error Image Link http://tourneypick.com/Upload/2015-09-10%2017_46_06-Firebug%20-%20register%20_%20Application.png
Success on Next click Image Link http://tourneypick.com/Upload/2015_09_10_17_45_27_Firebug_register_Application.png
This works properly on the staging server.
After a long investigation I found the cause: my application was hosted in the West US region while the database was in the Central US region, so loading data took too long. After moving the database to West US, the application and database are in the same region; execution is now fast and there are no more 500 errors.
The issue might be a connection timeout.
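If the cause were a timeout rather than cross-region latency, the usual first mitigation is to raise `Connect Timeout` in the ADO.NET connection string. A sketch; the server name, database, and credentials below are all placeholders:

```
Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;
User ID=youruser;Password=yourpassword;Encrypt=True;Connect Timeout=30;
```

Note that a longer timeout only hides slowness; if every call sits near the limit, the real fix is co-locating the app and database as described above.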

FQDN in SQL connection string

We have an Azure website and an Azure VM;
our SQL instance is on the VM.
I am trying to craft a connection string that will allow the Azure site to see the SQL box.
Using the FQDN doesn't seem to work.
Any help would be greatly appreciated.
Thanks
You will have to open a port (not too wide) for the SQL Server on the VM. You can do this by setting up an Endpoint. The good thing is that an Endpoint has a public port (what the Internet sees) and a private port (where the connection goes on the VM itself), easily masking the default port 1433. My personal advice is that you NEVER expose public port 1433 for your server. Even then, I would advise you to use an ACL on the Endpoint to only allow connections from Azure websites in the data center where your site is deployed. As stated in the last referenced article, you should not assume that traffic originating from Azure data centers is trustworthy, but at least you limit the attack surface for your SQL Server.
You may also evaluate using Hybrid Connections with a VM, but I never tried it.
Another side of the story is that you may want to consider using SQL Azure (sorry, Azure SQL Database) instead of maintaining own SQL Server. Then your connection will be securely established without a lot of hassle.
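To make the Endpoint approach concrete: if the endpoint maps, say, public port 57500 (an arbitrary choice for illustration) to private port 1433, the connection string references the cloud service's FQDN and the public port. All names and values below are placeholders:

```
Server=tcp:yourservice.cloudapp.net,57500;Database=yourdb;
User ID=youruser;Password=yourpassword;
```

The `tcp:` prefix and the comma-separated port tell the SQL client to skip the browser service and connect directly, which is what you want through a firewall/endpoint.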

AzureWorkerHost get the uri after startup for Neo4jClient

I am trying to create an ASP.NET project with Neo4jClient to be hosted on Azure, and I can't quite grasp how to do the following:
get hold of the Neo4j REST endpoint address once the worker role has started. I seem to see a different address each time the emulator spins up an instance of the worker role. I believe I'll need this to create a client, somewhat like this:
neo4jClient = new GraphClient(new Uri("http://localhost:7474/db/data"));
So, any thoughts on how to get hold of the URI after Neo4j is deployed by AzureWorkerHost?
Also, how is the graph database persisted on the blob store? In the example it always deploys a fresh, pristine database from the zip and overwrites it, which is probably not correct. I can't figure out where to configure this.
BTW, I am using Neo4j 2.0 M06, and when it runs in the emulator I get an endpoint like http://127.255.0.1:20000 in the emulator log, but I am unable to access it from my host machine.
any clue what might be going on here?
Thanks,
Kiran
AzureWorkerHost was a proof of concept that hasn't been touched in a year.
The GitHub readme says:
Just past alpha. Some known deficiencies still. Not quite beta.
You likely don't want to use it.
The preferred way of hosting on Azure these days seems to be IaaS approach inside a VM. (There's a preconfigured one in VM Depot, but that's a little old now too.)
Or, you could use a hosted endpoint from somebody like GrapheneDB.
To answer your question generally, though: Azure manages all the endpoints. The worker role says "hey, I need an endpoint to bind to!" and Azure works that out for it.
Then, you query this from the Web role by interrogating Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.Roles.
You'll likely not want to use the AzureWorkerHost for a production scenario, as the instances in the deployed configuration will destroy your data when they are re-imaged.
Please review these slides that illustrate step-by-step deployment of a Windows Azure Virtual Machine image of Neo4j community edition.
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
A Neo4j 2.0 Community Virtual Machine image will be released with the official release build of Neo4j 2.0. If you plan to use more than 30GB of data storage, please be aware that the currently supported VM image in Windows Azure's image depot must be configured from console through remote SSH to Linux.
Continue with your development using http://localhost:7474/ and then setup the VM when you are ready for a staging or production build to be deployed.
Also you can use Heroku's free Neo4j database deployment but you must configure the basic authentication for your GraphClient connection in Neo4jClient.
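Whichever hosted option you pick, the client must send HTTP basic authentication with each REST request. The header itself is just a base64-encoded `username:password` pair; a small sketch in Python (the credentials are placeholders, and in Neo4jClient you would pass them via `NetworkCredential` rather than building the header by hand):

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic `Authorization` header value (RFC 7617 scheme)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token
```

This is handy for testing the endpoint with curl or a raw HTTP client before wiring up the .NET side.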

SignalR + Redis on local computer

I've downloaded the latest SignalR.Redis package (v0.1) and compiled the latest Redis source code (2.4.26).
I tried to run Redis on my local PC (the server and client work well), but when I start SignalR with Redis as the message broadcaster, SignalR seems to open multiple connections to the server (same server = localhost, but multiple port numbers).
I know that Redis integration with SignalR is new and perhaps buggy, but is it possible to run Redis + SignalR on a local machine, or is this not a supported scenario?
Thanks.
SignalR will attempt a variety of connections to the server in order to keep an open connection. For most browsers it ends up long polling the server (which results in multiple requests regardless). What I ended up doing was allowing SignalR to connect in the normal fashion to my MVC app, and then calling actions on my controllers, which in turn communicate with Redis. This gives me the added benefit of being able to perform business logic in between. Not sure I answered your question, but I just wanted to share what has worked for me.
