Azure Container Instance is not accessible using URL via browser

I have created a new Container Instance in Azure. Below are the steps.
Step 1: I created a new Cognitive Services resource (a Language service) and used its "Key" and "Endpoint" values inside the Container Instance.
Step 2: I created a new Container Instance and provided all the required information as described in the article below:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal
However, I changed the port from 80 to 5001 and the image to "mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest".
Below are the environment variables I used:
[
  {
    "name": "Eula",
    "value": "accept"
  },
  {
    "name": "RAI_TERMS",
    "value": "accept"
  },
  {
    "name": "Billing",
    "value": "XXXXXXXXXXXXXXXXXXXXXXXXXXX"
  },
  {
    "name": "ApiKey",
    "value": "4a46537f51f64765864cabc20318bdcc"
  },
  {
    "name": "enablelro",
    "value": "true"
  }
]
Finally, it was created and deployed successfully. I then tried to access it via the URL below:
http://FQDN:5001/Demo/
(FQDN is the fully qualified domain name of the container instance, which is used in the URL.)
It is not accessible, even though the instance is up and running.

It doesn't matter which port you are trying to access it from. Instead of using the URL http://FQDN:5001/Demo/, I would suggest you use the FQDN or the IP address of the container instance.
Using the complete FQDN when identifying something is the way it is supposed to be done.
You can refer to this thread, where I reproduced a scenario related to your question and used the FQDN to access the Container Instance.
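If you want to double-check whether anything is answering on the port you exposed, a quick probe from outside can help. Below is a minimal Python sketch (not from the original answer); the FQDN is a placeholder, and it assumes port 5001 was exposed as in the question:

import requests

# Placeholder: the FQDN shown on the container instance's Overview page in the portal.
fqdn = "mycontainer.eastus.azurecontainer.io"

# Cognitive Services containers usually expose /status and /swagger; this only
# checks whether anything answers on the exposed port at all.
for path in ("/status", "/swagger"):
    url = f"http://{fqdn}:5001{path}"
    try:
        resp = requests.get(url, timeout=10)
        print(resp.status_code, url)
    except requests.exceptions.RequestException as exc:
        print("not reachable:", url, exc)

If these requests time out, the problem is more likely the port/network configuration of the container group than the URL path you are browsing to.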


How to add dynamic container in storage account

Using Logic Apps, I am trying to copy blobs from one container into several separate, dynamically created containers; however, there doesn't appear to be a "create container" action in Logic Apps.
I have tried using the "Create Blob" action with the desired container name specified as part of the "Blob Name" parameter, but this fails with a 404 message:
{
  "status": 404,
  "message": "Specified container telemetery-30dfb0bd-73b0-42a3-8677-63bde2fd4b43 does not exist.\r\nclientRequestId: blahblahh-e60e-44e1-aec4-c32a21659257",
  "error": {
    "message": "Specified container telemetery-30dfb0bd-73b0-42a3-8677-63bde2fd4b43 does not exist."
  },
  "source": "blahblha-ne.azconn-ne-01.p.azurewebsites.net"
}
The original request is -
{
  "method": "post",
  "queries": {
    "folderPath": "/",
    "name": "/telemetery-30dfb0bd-73b0-42a3-8677-63bde2fd4b43/timeline,xml",
    "queryParametersSingleEncoded": "True"
  },
  "path": "/datasets/default/files",
  "host": {
    "connection": {
      "name": "/subscriptions/blahblah-6866-4c8c-b3f1-41039ad2b3eb/resourceGroups/RG-blahblahg/providers/Microsoft.Web/connections/azureblob"
    }
  },
  "body": "file content"
}
Is there a way to create a blob container using Logic Apps?
According to the documentation, there's no "create container" operation:
https://learn.microsoft.com/en-us/connectors/azureblobconnector/
What you can do is write an Azure Function and chain it as part of your workflow in order to create the container:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet#create-a-container
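The linked quickstart is for .NET, but to illustrate what the function would do, here is a rough sketch of the same container-creation step using the Python SDK (azure-storage-blob); the connection string is a placeholder and would normally come from an app setting:

from azure.storage.blob import BlobServiceClient

# Placeholder: in an Azure Function this would come from an application setting.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

def ensure_container(container_name: str) -> None:
    # Create the container only if it does not already exist.
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(container_name)
    if not container.exists():
        container.create_container()

ensure_container("telemetery-30dfb0bd-73b0-42a3-8677-63bde2fd4b43")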
For now there is no action to create a blob container, so you could implement it with an Azure Function as Thiago proposed. You could also use the REST API to do it; the sketch below uses a SAS token, but you could try other authorization methods.
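A minimal Python sketch of that REST call (an illustration, not the answerer's original test; the account name and SAS token are placeholders, and the SAS must allow creating containers):

import requests

# Placeholders: storage account name and a SAS token with permission to create containers.
ACCOUNT = "mystorageaccount"
SAS_TOKEN = "sv=...&ss=b&srt=sco&sp=rwdlacx&sig=..."

def create_container(name: str) -> int:
    # Blob service "Create Container" operation:
    # PUT https://<account>.blob.core.windows.net/<container>?restype=container
    url = f"https://{ACCOUNT}.blob.core.windows.net/{name}?restype=container&{SAS_TOKEN}"
    return requests.put(url).status_code  # 201 = created, 409 = already exists

print(create_container("telemetery-30dfb0bd-73b0-42a3-8677-63bde2fd4b43"))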

Ocelot swagger redirection issue

I am trying to implement Ocelot/Swagger/MMLib and .NET microservices on my Windows 2019 server.
Everything is working fine: I can call each of the microservices correctly through the API gateway using Postman, but I would like to display the Swagger documentation, as the API is going to be used by a third party.
If I use the IP address/port number, I get the correct page displayed with my microservice definitions. However, if I reroute this to a physical URL (e.g. https://siteaddress.com/path/swagger.index.html), I get the main Swagger document but a 'Failed to load API definition' error, followed by 'Fetch error undefined /swagger/docs/v1/test'.
The network page of my browser inspection shows an 'HTTP Error 404.0 Not Found'. The requested URL is 'https://siteaddress.com:443/swagger/docs/v1/test'.
My ocelot.json is:
{
"Routes": [
{
"DownstreamPathTemplate": "/api/v1/TestSvc/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "test.api",
"Port": "80"
}
],
"UpstreamPathTemplate": "/api/v1/TestSvc/{everything}",
"UpstreamHttpMethod": [ "POST" ],
"SwaggerKey": "test"
}
...
],
"SwaggerEndPoints": [
{
"Key": "test",
"Config": [
{
"Name": "Test API",
"Version": "v1",
"Url": "http://test.api:80/swagger/v1/swagger.json"
}
]
}
...
]
}
I have tried changing paths in ocelot.json and Startup.cs. I can see nothing in the MMLib documentation regarding this scenario, which is surely common when deploying these sites.
Suggestions on where to go next are appreciated.
[Screenshot: page with IP address and port number]
[Screenshot: page with physical address and error message]
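One way to narrow down where the 404 comes from is to request the generated Swagger document directly through both the IP:port that works and the public host name that does not, and compare the responses. A rough diagnostic sketch in Python; both base URLs are placeholders based on the question:

import requests

# Placeholders: the gateway IP:port that works, and the public host/path that returns 404.
working_base = "http://203.0.113.10:5000"
public_base = "https://siteaddress.com/path"

for base in (working_base, public_base):
    url = f"{base}/swagger/docs/v1/test"
    try:
        resp = requests.get(url, timeout=10)
        print(resp.status_code, url)
    except requests.exceptions.RequestException as exc:
        print("error", url, exc)

If the document is only reachable without the /path prefix, the reverse proxy is probably stripping or not forwarding the path base to the gateway.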

chronograf: Not able to add default influxDB connection when using OAuth 2.0

I configured Chronograf to use generic OAuth 2.0 (using Cloud Foundry UAA). User authentication works fine, but the problem is that the default InfluxDB connection is not taken into account. In fact, this configuration works:
chronograf --log-level="debug" --resources-path="/usr/share/chronograf/resources" --influxdb-url="http://influxDB.log.database:8086" --influxdb-username="usename" --influxdb-password="pass"
Here is the content of the /usr/share/chronograf/resources folder:
influxdb.src:
{
  "id": "9999",
  "name": "MyInfluxDB",
  "username": "user1",
  "password": "password1",
  "url": "http://influxDB.log.database:8086",
  "type": "influx",
  "insecureSkipVerify": true,
  "default": true,
  "telegraf": "telegraf.autogen",
  "organization": "Default"
}
Both connections are automatically created when Chronograf starts:
MyInfluxDB
http://influxDB.log.database:8086
But when I run Chronograf with the following options (to use OAuth 2.0 and create an InfluxDB connection):
export TOKEN_SECRET="token_secret"; export JWKS_URL="https://uaa/token_keys"; export PUBLIC_URL="http://chronograf:8888"; chronograf --log-level="debug" --resources-path="/usr/share/chronograf/resources" --generic-name="generic" --generic-client-id="id" --generic-client-secret="secret" --generic-scopes="openid" --generic-auth-url="https://uaa/oauth/authorize" --generic-token-url="https://uaa/oauth/token" --generic-api-url="https://uaa/userinfo"
OAuth 2.0 works fine, but once redirected to the Chronograf dashboard I cannot see the connections. Even when I create a connection manually and log in again, I cannot find any connection that was created automatically on startup as intended.
The organization field needs an id. The id for the Default organization uses a lowercase d. If you change your .src file to:
{
  "id": "9999",
  "name": "MyInfluxDB",
  "username": "user1",
  "password": "password1",
  "url": "http://influxDB.log.database:8086",
  "type": "influx",
  "insecureSkipVerify": true,
  "default": true,
  "telegraf": "telegraf.autogen",
  "organization": "default"
}
It should now work.
You can see where the id is defined in their source here: https://github.com/influxdata/chronograf/blob/9d8a49ba0ef8131cdce22d73718859f55f434db2/bolt/organizations.go#L20
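If you want to confirm which id the Default organization actually has on your instance, Chronograf's HTTP API can list organizations; a rough sketch (the host is a placeholder, and when OAuth is enabled you would also need to pass your session cookie or token):

import requests

# Placeholder host; authentication handling is omitted in this sketch.
resp = requests.get("http://chronograf:8888/chronograf/v1/organizations", timeout=10)
for org in resp.json().get("organizations", []):
    print(org.get("id"), org.get("name"))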

Using characters such as ) in Bluemix runtime environment variables

I've got a Ruby on Rails app running on Bluemix. With this app I use a couple of services, one of which is Object Storage.
Logically, I want to put the credentials that I use for each environment (dev and prod) in the environment variables that you can specify in the Runtime tab within Bluemix.
I want to put a password like this in there:
23aSeefae,,)ewFe
The runtime environment does not accept the ) sign; it rejects the value with an error.
I have tried double quotes, single quotes, and I have tried to escape the ) sign with a backslash.
Any help would be appreciated. Is there any way in which I can store my variables outside of my app and within the Bluemix environment instead?
PS: this is not a real password.
You have to bind (connect) your Object Storage service instance to your application in Bluemix so that the VCAP_SERVICES environment variable is automatically created for you.
Here is an example of a VCAP_SERVICES environment variable for an application bound to an Object Storage service instance (I have modified some data for security reasons):
{
  "Object-Storage": [
    {
      "credentials": {
        "auth_url": "https://identity.open.softlayer.com",
        "project": "object_storage_a92583b3_329e_4ed8_8918_xxx",
        "projectId": "7f1f5659d21340dfaa4568dxxxx",
        "region": "dallas",
        "userId": "abcdefghxxxxxxxxxxxxx",
        "username": "admin_3ff9bf1e187e7fa02e28c96232dxxxxxxx",
        "password": "BF_0_)s3#xxxXXbY^",
        "domainId": "79fc08601744486abf930000000000",
        "domainName": "761111",
        "role": "admin"
      },
      "syslog_drain_url": null,
      "label": "Object-Storage",
      "provider": null,
      "plan": "standard",
      "name": "app-object-storage",
      "tags": [
        "storage",
        "ibm_release",
        "ibm_created"
      ]
    }
  ]
}
You can then read this as a JSON object in your Ruby code, for example:
require 'json'

# Parse VCAP_SERVICES and pull out the Object Storage credentials
vcap_services = JSON.parse(ENV['VCAP_SERVICES'])
credentials = vcap_services["Object-Storage"][0]["credentials"]
password = credentials["password"]
I've gotten help from Bluemix support as well now. This is by far the easiest way to do what I want:
You can set environment variables through the Cloud Foundry command line interface.
cf set-env <APP_NAME> <ENV_VAR_NAME> <ENV_VAR_VALUE>
You will have to restage your app before you can use them.
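For example (the app name is a placeholder; single quotes keep the shell from interpreting the parenthesis):
cf set-env my-rails-app OBJECT_STORAGE_PASSWORD '23aSeefae,,)ewFe'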

'eventNotification' url is not being called

I am integrating DocuSign with our application. In the testing phase we got a server with a public IP and port (8086). On this port I have published my ASP.NET MVC Web API, and my URL becomes:
http://XXX.XXX.XXX.XXX:8086/api/DocuSign/DocuSignDocumentStatus
This URL is going to be called by DocuSign whenever my document status changes.
Our network team has allowed the following IPs inbound access to this public IP/URL:
www.docusign.net 209.67.98.12
mailsea.docusign.net 209.67.98.59
NA2
na2.docusign.net 206.25.247.140
mailch.docusign.net 206.25.247.155
EU1
eu1.docusign.net 206.25.247.144
mailch.docusign.net 206.25.247.155
DAL/DR
demo.docusign.net 209.46.117.172
preview.docusign.net 209.46.117.174
mailda.docusign.net 209.46.117.17
I got this from the Connect service reference PDF.
I am uploading the document for signing via the DocuSign Web API, and I am sending this URL with the request. I have checked and rechecked many times that the JSON request being created is correct (pasting it below):
{
  "status": "sent",
  "emailBlurb": "",
  "emailSubject": "DocuSign API - Signature Request on Document Call",
  "documents": [
    {
      "name": "someDocument.xls",
      "documentId": "1"
    }
  ],
  "recipients": {
    "signers": [
      {
        "recipientId": "1",
        "email": "john.doe#someCompany.com",
        "name": "John Doe",
        "tabs": {
          "signHereTabs": [
            {
              "xPosition": "100",
              "yPosition": "100",
              "documentId": "1",
              "pageNumber": "1"
            }
          ]
        },
        "routingOrder": "1"
      }
    ],
    "carbonCopies": [
      {
        "recipientId": "2",
        "email": "some1.recipient#someCompany.com",
        "name": "Some1 Recipient"
      },
      {
        "recipientId": "3",
        "email": "some2.recipient#someCompany.com",
        "name": "Some2 Recipient"
      },
      {
        "recipientId": "4",
        "email": "some3.recipient#someCompany.com",
        "name": "Some3 Recipient"
      }
    ]
  },
  "eventNotification": {
    "url": "http://XXX.XXX.XXX.XXX:8086/api/DocuSign/DocuSignDocumentStatus",
    "loggingEnabled": true,
    "requireAcknowledgement": true,
    "includeDocuments": false,
    "envelopeEvents": [
      {
        "envelopeEventStatusCode": "Completed"
      },
      {
        "envelopeEventStatusCode": "Declined"
      }
    ]
  }
}
I am able to successfully upload the document; emails are sent to all signers and the document gets signed, BUT DocuSign for some reason does not call my URL with the status of the document. Please help. Let me know if you need any more information.
As specified by the answer and subsequent comment in this other question:
Regardless of whether you're using DocuSign Connect (configured at the account level) or using eventNotification (specified at the Envelope level), DocuSign will only publish messages to the "standard/default" ports:
In the DocuSign demo environment (demo.docusign.net) DocuSign Connect will publish to either port 80 (http) or port 443 (https). If the URL starts with "http", Connect will attempt to publish to port 80. If the URL starts with "https", Connect will attempt to publish to port 443.
In the DocuSign production environment (www.docusign.net), DocuSign Connect will only publish to port 443 (https). Publishing to port 80 (http) is not supported in the production environment -- the listener endpoint must be https.
Therefore, I'd suggest that you remove the port number from the URL that you've specified for "eventNotification", and ensure that your listener endpoint is located at either port 80 (for demo) or port 443 (for demo or prod).
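For illustration, a minimal sketch of posting the envelope with the eventNotification URL adjusted to https without a custom port (this is an assumption of how the corrected request might look, not DocuSign's own sample; the base URI, account id, access token, and listener host are placeholders, and the documents/recipients are abbreviated):

import requests

# Placeholders -- substitute real values for your account.
BASE_URI = "https://demo.docusign.net/restapi"   # demo environment
ACCOUNT_ID = "<account-id>"
ACCESS_TOKEN = "<oauth-access-token>"

envelope = {
    "status": "sent",
    "emailSubject": "DocuSign API - Signature Request on Document Call",
    # ... documents and recipients exactly as in the original request ...
    "eventNotification": {
        # https and no custom port: Connect only publishes to 80 (http) or 443 (https).
        "url": "https://listener.example.com/api/DocuSign/DocuSignDocumentStatus",
        "loggingEnabled": True,
        "requireAcknowledgement": True,
        "includeDocuments": False,
        "envelopeEvents": [
            {"envelopeEventStatusCode": "Completed"},
            {"envelopeEventStatusCode": "Declined"},
        ],
    },
}

resp = requests.post(
    f"{BASE_URI}/v2.1/accounts/{ACCOUNT_ID}/envelopes",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=envelope,
)
print(resp.status_code, resp.text)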
