How to set docker lambda function name - docker

I deployed a simple Python Lambda based on the Python 3.8 Docker image (amazon/aws-lambda-python:3.8).
I can successfully invoke it locally using curl, like this (it returns a 200 OK and valid results):
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"Hi": "abc"}'
That's great, but to minimise differences between environments, I'd like to be able to call it from Java code using the same name it would have in production. The URL above refers to the function simply as function.
Is there a way to bake the function name into the lambda docker image?

The URL used for local testing reflects how the internal AWS components communicate with each other. For example, if you are using API Gateway, enable API Gateway logs and you will notice this URL in the logs when API Gateway invokes the Lambda.
When deployed in AWS, you can call this function the same way you call any non-containerized Lambda function.
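One way to keep the calling code identical across environments is to read the endpoint and function name from configuration and only override the endpoint when testing locally. Below is a minimal sketch using boto3 (the same endpoint-override approach exists in the Java SDK); note that the local runtime interface emulator is invoked under the name function, as in the curl example above, while in production you would use the real function name and drop the endpoint override:

import boto3

# Locally, point the client at the runtime interface emulator mapped to port 9000;
# in production, omit endpoint_url and use the deployed function name.
# The credentials below are dummies that the local emulator does not check.
client = boto3.client(
    'lambda',
    endpoint_url='http://localhost:9000',   # omit in production
    region_name='us-east-1',
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)

response = client.invoke(FunctionName='function', Payload=b'{"Hi": "abc"}')
print(response['Payload'].read())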

Related

Getting oAuth Token using MSAL PublicClientApplication acquire_token_interactive method from Databricks is not working : InteractiveBrowserCredential

I am trying to get an OAuth 2.0 token for a protected resource using the InteractiveBrowserCredential flow.
This works from my local Jupyter notebook; however, when I run it from a Databricks notebook, it is unable to open a browser (the Databricks cluster has no browser installed) and gives me the message below:
Found no browser in current environment. If this program is being run inside a container which has access to host network (i.e. started by `docker run --net=host -it ...`), you can use browser on host to visit the following link. Otherwise, this auth attempt would either timeout (current timeout setting is None) or be aborted by CTRL+C. Auth URI: https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?client_id={client_id}&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A44093&scope={resource_id}%2Fuser_impersonation+offline_access+openid+profile&state=EvgdkFcNZTuJG&code_challenge=KR8zwfjhkuKYTGSlbaYAJNLVjXZHiE&code_challenge_method=S256&nonce=33a1a12813342535455f398GHATf9c2cf21a8&client_info=1
I am trying to find out if there is a way to make this work (for example, by somehow using a public redirect_uri pointing to the Databricks cluster/driver node, or something similar). I can alternatively use the device_code flow (it works), but I want to see if I can bypass the extra step of entering the device code and authenticate directly through the browser.
Please find below the sample code I am using now:
import msal

app = msal.PublicClientApplication(self.CLIENT_ID,
                                   authority=self.AUTHORITY,
                                   token_cache=msal.TokenCache())
result = app.acquire_token_interactive(scopes=self.SCOPE)
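For reference, here is a minimal sketch of the device_code fallback mentioned above; CLIENT_ID, AUTHORITY and SCOPE are placeholders for the same values used in the interactive example:

import msal

# Placeholders - substitute the values from your own app registration.
CLIENT_ID = '<client-id>'
AUTHORITY = 'https://login.microsoftonline.com/<tenant-id>'
SCOPE = ['<resource-id>/user_impersonation']

app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)
flow = app.initiate_device_flow(scopes=SCOPE)
print(flow['message'])  # prints the URL to visit and the code to enter
result = app.acquire_token_by_device_flow(flow)  # blocks until sign-in completes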

Using the aws-cdk-local package and Localstack to test Lambda and APIGateway

Could someone tell me how to do the following?
I have created a Cloud Development Kit (CDK) app which has an API Gateway and a Lambda function.
I want to use the aws-cdk-local package and LocalStack to test this locally.
I have installed everything correctly and I can deploy my CDK app to LocalStack.
How do I get the endpoint to test the API Gateway? The endpoints I see in the console after running cdklocal deploy are not correct.
Using something like http://localhost:4566/restapis/my-api-gateway-id/dev/ results in:
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>restapis</BucketName>
<RequestId>xxxxx-xxxx-xxxx-xxxx-6e8eEXAMPLE</RequestId>
</Error>
Any advice or comments on how to construct the correct endpoint are most welcome.
For anyone else, it appears the URL below works:
http://localhost:4566/restapis/restapi-id/local/_user_request_/
Note that port 4566 is the port my LocalStack runs on.
Use
aws --endpoint-url=http://localhost:4566 apigateway get-rest-apis
to get the REST API id(s).
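The same lookup can be done from Python. A minimal sketch, assuming the default LocalStack edge port 4566, dummy 'test' credentials (LocalStack does not verify them), the stage name local from the URL above, and a single deployed API:

import boto3
import requests

apigw = boto3.client('apigateway',
                     endpoint_url='http://localhost:4566',
                     region_name='us-east-1',
                     aws_access_key_id='test',
                     aws_secret_access_key='test')

api_id = apigw.get_rest_apis()['items'][0]['id']  # assumes one deployed REST API
url = f'http://localhost:4566/restapis/{api_id}/local/_user_request_/'
print(requests.get(url).status_code)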

Does the docker version of opennms have Rest API?

I am building a front-end-like application for OpenNMS and chose to install OpenNMS in a Docker container. I need to use the REST API for my project, but when I send a request to
http://localhost:8980/opennms/rest using the Python requests library, the return code is 404.
Does OpenNMS for Docker not have a REST API, or do I need to install it on my host system instead of Docker?
P.S. This is my first time trying to use the REST API of an application.
imgur link : https://imgur.com/5UDPjRF
The URL you are calling is just the base URL for our REST resources. As an example, for nodes you can test it with curl -u admin http://localhost:8980/opennms/rest/nodes. The REST API endpoints are described in the documentation.
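The same check from Python, using the requests library the question already uses; admin/admin below assumes the default OpenNMS credentials, so adjust them for your install:

import requests

resp = requests.get('http://localhost:8980/opennms/rest/nodes',
                    auth=('admin', 'admin'),              # default credentials; change as needed
                    headers={'Accept': 'application/json'})
print(resp.status_code)
print(resp.json())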

No route registered for '/docker/hook'

I'm creating an Azure App Service based on a Docker image. The image is in the public Docker registry, so I want the service to 'know' when there's a new version of the image (same tag).
I thought the WebHook under Continuous Deployment was meant to achieve that, but when I call it with curl I get the message in the subject.
I couldn't find the right doc... Is that WebHook URL for what I think (hope) it is? Is there a specific HTTP verb to use?
EDIT: I mean the WebHook URL found under Continuous Deployment in my Container Settings in Azure
I was stuck on this one for some time as well, until I realized that it requires an HTTP POST request to that URL.
Here is an example of the curl request that I have in my GitLab CI script:
curl -X POST "https://\$$AZURE_DEPLOY_USER:$AZURE_DEPLOY_PASSWORD@$AZURE_KUDU_URL/docker/hook"
It requires the following variables to be set in the environment, or you can substitute your own values directly into the URL:
$AZURE_DEPLOY_USER
$AZURE_DEPLOY_PASSWORD
$AZURE_KUDU_URL
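If you would rather trigger the hook from Python than from curl, a minimal sketch is below; the username, password and Kudu host are placeholders for the values shown under Container Settings:

import requests

deploy_user = '$my-app'                      # Azure deployment username (starts with $)
deploy_password = '<deployment-password>'    # placeholder
kudu_url = 'my-app.scm.azurewebsites.net'    # placeholder Kudu host

resp = requests.post(f'https://{kudu_url}/docker/hook',
                     auth=(deploy_user, deploy_password))
print(resp.status_code)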

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which loads and stores files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3), and one containing the Python-based solution that exposes a REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have a Python test suite for the scality/s3 server running on the host (Windows 10), going over the ports forwarded through Vagrant to the scality/s3 server container within the docker-compose group. There we used the endpoint_url localhost and it works perfectly.
In the error case (when the front-end web service wants to write to S3), the "front-end" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the Scality server with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # the exception is raised here
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which you mount in your Docker container, I assume, there is a restEndpoints section, where you must associate a domain name with a default region. What that means is your frontend domain name should be specified in there, matching a default region.
Note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
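For illustration only, the restEndpoints section of config.json could look like the snippet below; s3server is the hostname from the boto3 endpoint_url above, and us-east-1 is just an example default region:

"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}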
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure
