A few days ago I was able to connect one of my apps to one of my database instances from the Google Cloud Run service configuration form. However, lately I have noticed two things:
I'm no longer able to select the database instance my service is/will be connected to.
On a service that is connected using this method, I no longer see the database connection name at the bottom of the details panel.
Is this a symptom that the database connections feature will disappear from the Google Cloud Run settings?
This seems like a good case for using the Cloud SDK to confirm that your Cloud Run service can communicate with Cloud SQL; that will help establish whether you have a UI problem or something deeper. This is especially relevant given that the documentation states the Console instructions are not available yet.
Cloud Run supports Cloud SQL via gcloud, using a special flag to associate a Cloud SQL instance with an individual service.
Once this is done, the Cloud SQL instance will be available to the Cloud Run service until it is explicitly removed.
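For reference, the association is made with the --add-cloudsql-instances flag (a sketch; the service name and instance connection name below are placeholders):

gcloud beta run services update [SERVICE-NAME] \
  --add-cloudsql-instances [PROJECT-ID]:[REGION]:[INSTANCE-NAME]

The same flag can also be passed to gcloud beta run deploy when the service is first deployed.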
You can verify this connection is in place by looking at the service description:
gcloud beta run services describe [SERVICE-NAME]
In the response, you should see the property run.googleapis.com/cloudsql-instances inside spec.runLatest.configuration.revisionTemplate.metadata.annotations.
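For illustration, the relevant slice of the output looks roughly like this (an excerpt with placeholder names; the exact shape may vary by gcloud version):

spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            run.googleapis.com/cloudsql-instances: [PROJECT-ID]:[REGION]:[INSTANCE-NAME]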
As long as that annotation is present and contains your Cloud SQL instance connection name, your service should be able to connect to the SQL instance as documented (assuming your service has authorization to connect to the Cloud SQL instance).
I am currently writing tests for an existing project based on Azure Functions. The project uses SignalR to send live update messages to the clients.
For my tests I am currently using a SignalR instance running in the cloud, but I need to replace it with a "local" instance on the system that runs the tests, so I can be 100% sure that the SignalR message is coming from my test session.
Does anybody have an idea how to get a SignalR server running in a Docker container for my tests? (I need a connection string I can provide to the Azure Functions app.)
I could not find anything online. I am sure I am not the only one who wants to test whether SignalR messages are sent correctly, and I would prefer not to implement the SignalR server myself.
The bindings available in Azure Functions are for the Azure SignalR Service, not SignalR itself, so unfortunately there is no way to test this locally.
You could simply create a test Azure SignalR Service instance and use that instead.
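If you go that route, the test instance can be created and its connection string retrieved from the Azure CLI, for example (a sketch; the resource names are placeholders, and the Free tier should be enough for tests):

az signalr create --name my-test-signalr --resource-group my-test-rg --sku Free_F1
az signalr key list --name my-test-signalr --resource-group my-test-rg --query primaryConnectionString --output tsv

The second command prints the connection string you can feed to the Functions app (e.g. via the AzureSignalRConnectionString setting).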
I didn't find any way to achieve this, so I created a repo with a small mock service for the SignalR service. I hope I am allowed to post such things here.
My repo and my Docker image
Feel free to use or fork it. I am not sure if I will ever find the time to maintain it.
I've had a little dig through the Azure documentation but couldn't find a definitive answer.
I have an App Service and an Azure database sitting in the same resource group, and I am finding that the site takes a long time to connect and get responses back from the database, but only in the hosted environment.
Is it possible to specify a localhost equivalent, since they are in the same resource group, and would this make things any quicker?
A resource group does not have any impact on the connectivity or latency between the application and the database. It is just a way of grouping Azure resources together based on a project/environment.
There is no localhost equivalent for a resource group or even an App Service, unless you want to run your application yourself in IIS or on some other server.
If you really want to see what is causing the connectivity issue, I recommend monitoring the requests and responses using Azure Monitor.
I think you need to understand the cloud concepts before trying anything out.
I'm new to Neo4j and am learning the Desktop application. I see that I can Add a Database (I can either Create a Local Graph or Connect to a Remote Graph). Creating a local graph obviously means creating a database on my computer, one with its own bolt://... URL of some sort. If I instead Add a Remote Graph, does this imply that I can connect to another local graph stored on my laptop, for example, if I know its bolt URL? I presume I can't, but I want to make sure.
Next, if Remote implies stored in the cloud or served somehow, short of setting up a Neo4j instance on AWS or via another third party, does Neo4j come with its own easy way to set up a "remote" instance, and where would this live? Does Neo4j have its own cloud?
Remote Graph implies that there's a running Neo4j instance out there "somewhere" that we can connect to via the bolt URL, similar to how we would connect from a client application using a Neo4j driver (after all, Neo4j Desktop and Neo4j Browser are both client applications and connect via Neo4j drivers).
That might be a server instance somewhere set up by your company, or an instance running from your own laptop (not launched from Desktop), or maybe a Neo4j Aura instance you've set up yourself, or something on AWS or another cloud.
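For example, to treat an instance running on your own laptop (started outside of Desktop) as a Remote Graph, you would add it by its bolt URL. You can sanity-check that URL first with cypher-shell (a sketch; 7687 is the default bolt port, and the credentials are placeholders):

echo "RETURN 1;" | cypher-shell -a bolt://localhost:7687 -u neo4j -p <password>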
You can't administer remote instances, as bolt connections do not allow for starting/stopping Neo4j or other operations that require command-line access (though 4.0 security administration via the system db is supported).
For the most part, though, Desktop is agnostic about where or how the remote instance is set up; it just requires a bolt URL that can be used to connect to the instance or cluster.
I am new to SAP and to developing SAP Fiori applications. I want to create a project consuming an OData service.
I have created an SAP Cloud Platform Cockpit trial account and created a destination for my in-house development gateway.
When I click on the test connection button, it shows "host not found: 502".
I am not able to access the OData connection URL without the saml2=disabled parameter, so I tried Basic authentication using my SAP user.
SAP is hosted on the Azure cloud. What am I missing here?
Based on your system landscape, you might need to configure the SAP Cloud Connector, as documented here:
“The Cloud Connector serves as a link between SAP Cloud Platform applications and on-premise systems. It combines an easy setup with a clear configuration of the systems that are exposed to the SAP Cloud Platform. You can also control the resources available for the cloud applications in those systems. Thus, you can benefit from your existing assets without exposing the whole internal landscape.
The Cloud Connector runs as on-premise agent in a secured network and acts as a reverse invoke proxy between the on-premise network and SAP Cloud Platform. Due to its reverse invoke support, you don't need to configure the on-premise firewall to allow external access from the cloud to internal systems.”
And also look here:
“If your remote system resides behind a firewall (proxy type OnPremise), the following prerequisites must be met:
You have set up Cloud Connector and defined a virtual host mapping for the system.”
Also take a look here:
“Maintain Destinations for SAP Cloud Platform Connector
In the SAP Cloud Platform Cockpit, maintain destinations for each target system to enable communication via the SAP Cloud Platform Connector.
For on premise systems, make sure to select the Proxy Type OnPremise.”
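Illustratively, a destination for an on-premise gateway might end up with properties along these lines (a sketch; the name, virtual host, and port are placeholders that must match your Cloud Connector mapping):

Name: GW_DEV
Type: HTTP
URL: http://my-virtual-host:8000
ProxyType: OnPremise
Authentication: BasicAuthentication
WebIDEEnabled: true
WebIDEUsage: odata_abap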
I also faced a similar issue last night. It generally comes down to a configuration problem.
For me, it was the proxy type setting: it was set to 'Internet'.
It had to be set to 'OnPremise'. After changing that, it connected successfully.
So I have Windows Server 2016 TP5 and I'm playing around with containers. I am able to do basic Docker tasks fine. I'm trying to figure out how to containerize some of our IIS-hosted web applications.
The thing is, we usually use integrated authentication for the DB and domain service accounts for the app pool. I currently don't have a test VM that is in a domain, so I can't test whether this will work inside a container.
If the host is joined to an AD domain, are its containers also part of the domain? Can I still run processes using domain accounts?
EDIT:
Also, if I specify the "USER" in the Dockerfile, does this mean that my app pool will run as that user (instead of the app pool identity)?
There are at least some scenarios where AD integration in a Docker container actually works:
You need to access network resources with AD credentials.
Run cmdkey /add:<network-resource-uri>[:port] /user:<ad-user> /pass:<pass> under the local identity that needs this access.
To apply the same trick to IIS apps without modifying the AppPoolIdentity, you'll need a simple .ashx wrapper around cmdkey. (Note: you'll have to call this wrapper at run time, e.g. during ENTRYPOINT; otherwise the network credentials will be mapped to a different local identity.)
You need to run code under an AD user.
Impersonate using the ADVAPI32 function LogonUser with LOGON32_LOGON_NEW_CREDENTIALS and LOGON32_PROVIDER_DEFAULT, as suggested.
You need transport-layer network security, such as when making RPC calls (e.g. MSDTC) to AD-based resources.
Set up a gMSA using whichever guide suits you best (see the sketch below). Note, however, that gMSA requires the Docker host to be in the domain.
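As an illustration of the gMSA path, the flow on a domain-joined host looks roughly like this (a sketch; the account, file, and image names are placeholders, and New-CredentialSpec comes from Microsoft's CredentialSpec PowerShell module):

# Generate a credential spec file for the gMSA on the Docker host
New-CredentialSpec -AccountName my-gmsa
# Start the container with that credential spec; processes running as
# LocalSystem or Network Service inside it authenticate on the network as the gMSA
docker run --security-opt "credentialspec=file://my-gmsa.json" -d my-iis-image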
Update: this answer is no longer relevant; it was for 2016 TP5. AD support has been added in later releases.
Original answer
Quick answer: no, containers are not supported as part of AD, so you can't use AD accounts to run processes within a container or authenticate with it.
This used to be mentioned on the MS Containers site but the original link now redirects.
Original wording (CTP 3 or 4?):
"Containers cannot join Active Directory domains, and cannot run services or applications as domain users, service accounts, or machine accounts."
I don't know if that will change in a later release.
Someone tried to hack around it but with no joy.
You can't join containers to a domain, but if your app needs to authenticate, you can use managed service accounts. That saves you the hassle of dealing with packaging passwords.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/management/manage_serviceaccounts