I am trying out the newly added container image support for AWS Lambda.
I have built a custom image using python-alpine as the base, following the same Dockerfile mentioned in the article above. I am also able to invoke it locally using the command below:
curl -v -X POST http://localhost:9000/2015-03-31/functions/function/invocations -H 'Content-Type: application/json' -d '{}'
The -d '{}' part is what gets passed to the function as the event. When this function sits behind an actual AWS API Gateway, it receives an event like the one below:
{"resource":"/","path":"/view","httpMethod":"POST","headers":{"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8","Accept-Encoding":"gzip, deflate, br","Accept-Language":"en-GB,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4","cache-control":"max-age=0","CloudFront-Forwarded-Proto":"https","CloudFront-Is-Desktop-Viewer":"true","CloudFront-Is-Mobile-Viewer":"false","CloudFront-Is-SmartTV-Viewer":"false","CloudFront-Is-Tablet-Viewer":"false","CloudFront-Viewer-Country":"GB","content-type":"application/x-www-form-urlencoded","Host":"j3ap25j034.execute-api.eu-west-2.amazonaws.com","origin":"https://j3ap25j034.execute-api.eu-west-2.amazonaws.com","Referer":"https://j3ap25j034.execute-api.eu-west-2.amazonaws.com/dev/","upgrade-insecure-requests":"1","User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36","Via":"2.0 a3650115c5e21e2b5d133ce84464bea3.cloudfront.net (CloudFront)","X-Amz-Cf-Id":"0nDeiXnReyHYCkv8cc150MWCFCLFPbJoTs1mexDuKe2WJwK5ANgv2A==","X-Amzn-Trace-Id":"Root=1-597079de-75fec8453f6fd4812414a4cd","X-Forwarded-For":"50.129.117.14, 50.112.234.94","X-Forwarded-Port":"443","X-Forwarded-Proto":"https"},"queryStringParameters":null,"pathParameters":null,"stageVariables":null,"requestContext":{"path":"/dev/","accountId":"125002137610","resourceId":"qdolsr1yhk","stage":"dev","requestId":"0f2431a2-6d2f-11e7-b799-5152aa497861","identity":{"cognitoIdentityPoolId":null,"accountId":null,"cognitoIdentityId":null,"caller":null,"apiKey":"","sourceIp":"50.129.117.14","accessKey":null,"cognitoAuthenticationType":null,"cognitoAuthenticationProvider":null,"userArn":null,"userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36","user":null},"resourcePath":"/","httpMethod":"POST","apiId":"j3azlsj0c4"},"body":{"msg":"update"},"isBase64Encoded":false}
My codebase depends on fields like path, resource, isBase64Encoded, body, etc. What I am trying to achieve is to make the application portable, so it can also run on Kubernetes. Is there a tool or a way that acts as an API gateway and passes an event like the one above when invoking this function?
I have looked at Tyk and Traefik, but neither of them can generate an event like the AWS API Gateway one and pass it to the function.
I'm not sure you need to simulate API Gateway itself; what matters is the JSON payload that is sent to the Lambda function.
e.g.:
aws lambda invoke --function-name docker-aws-cdk --region ap-southeast-1 --payload '{"action":"start", "target":"dev"}' outfile
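If your handler only reads a handful of fields, one pragmatic option is to synthesize an API Gateway-style event yourself and pass it as the payload (whether via aws lambda invoke or the local curl endpoint). A minimal sketch in Python; the field set is reduced to the ones the question mentions, and the helper name is illustrative:

```python
import json

def make_apigw_event(path, method="POST", body=None, headers=None):
    # Build a minimal event resembling the API Gateway (REST) proxy
    # format shown above; only the fields the handler reads are filled in.
    return {
        "resource": path,
        "path": path,
        "httpMethod": method,
        "headers": headers or {"Content-Type": "application/json"},
        "queryStringParameters": None,
        "pathParameters": None,
        "body": json.dumps(body) if body is not None else None,
        "isBase64Encoded": False,
    }

# The resulting JSON string can be passed as the -d / --payload argument.
event = make_apigw_event("/view", body={"msg": "update"})
print(json.dumps(event))
```

Any gateway or ingress in front of the container (e.g. on Kubernetes) would then just need a thin shim that wraps the incoming HTTP request into this shape before forwarding it to the invocations endpoint.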
I deployed a simple Python Lambda based on the Python 3.8 Docker image (amazon/aws-lambda-python:3.8).
I can successfully invoke it locally using curl, like this (it returns a 200 OK and valid results):
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"Hi": "abc"}'
That's great, but to minimise differences between environments, I'd like to be able to call it from Java code using the same name it would have in production. The URL above refers to the function simply as function.
Is there a way to bake the function name into the lambda docker image?
The URL used for local testing mirrors how the internal AWS components communicate. E.g., if you are using API Gateway, enable API Gateway logs and you will notice this URL in the logs when API Gateway invokes the Lambda.
When deployed in AWS, you can call this function the same way you call any non-containerized Lambda function.
I selected the area below and pressed Command+C.
[screenshot: Cloud Shell capture]
The following string was pasted from my clipboard:
You are viewing an offline list of runtimes. For up to dat
My browser is Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36
This is a known limitation.
Note: Firefox/IE may not support clipboard permissions properly.
For more details, refer to "Limitations of Azure Cloud Shell".
My Rails application is deployed on Amazon Elastic Beanstalk using Docker. Web requests flow into an nginx web server that forwards them to the Thin Rails server residing in Docker. Somewhere along the way there's a bottleneck: once every 50 requests or so, nginx reports a serving time that is 40x higher than the time the Thin server reports.
Here's an example:
NGINX (7490ms):
146.185.56.206 - silo [18/Mar/2015:13:35:55 +0000] "GET /needs/117.json?iPhone HTTP/1.1" 200 2114 "-" "Mozilla/5.0 (Macintosh;
Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/40.0.2214.115 Safari/537.36" 7.490 7.490 .
THIN (rails server): 171ms
2015-03-18 13:35:48.349 [INFO] method=GET path=/needs/117.json
format=json controller=needs action=show status=200
duration=171.96 view=109.73 db=29.20 time=2015-03-18 13:35:47
Can anyone offer some guidance on how to troubleshoot such a situation? How do I find the source of the response-time difference? I guess it could be nginx, Docker, Thin, or Linux itself.
It sounds like one of the Thin processes is busy with a heavy task and nginx is still sending requests to it. If the problem were queuing in Thin, the request itself would take a short time to process but would wait longer to get to the top of the queue. So first, check the other requests before or around that request.
Second, if you are proxying through an upstream block (http://nginx.org/en/docs/http/ngx_http_upstream_module.html), you can log variables such as $upstream_response_time.
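As a sketch, a custom log_format that records both the total request time and the upstream's own response time could look like this (the upstream name and port are placeholders for your Thin backend); comparing the two columns per request shows whether the extra time is spent inside Thin or in front of it:

```nginx
http {
    # request_time  = total time nginx spent on the request
    # upstream_time = time the upstream (Thin) took to respond
    log_format upstream_timing '$remote_addr [$time_local] "$request" '
                               '$status request_time=$request_time '
                               'upstream_time=$upstream_response_time';

    upstream thin_backend {
        server 127.0.0.1:3000;  # placeholder: your Thin server
    }

    server {
        listen 80;
        access_log /var/log/nginx/timing.log upstream_timing;

        location / {
            proxy_pass http://thin_backend;
        }
    }
}
```

A large request_time with a small upstream_time points at nginx/queuing/network; two large values point at the app itself.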
Third, you could also try to reproduce a similar setup in dev/QA and run a stress test. If you manage to reproduce it consistently, you can inspect the number of requests on each queue (e.g. http://comments.gmane.org/gmane.comp.lang.ruby.unicorn.general/848).
I am trying to use Fiddler on an iPad by connecting it to my PC over a wireless connection, so I can use the AutoResponder to mock some data.
The tutorial tells me I should use my machine's IP to connect the iPad and check whether it works. My problem is that I have multiple IPs (or at least it seems that I do).
One candidate would be 192.168.1.23, another 192.168.1.41.
How do I check which of the IP addresses is localhost, or are they both pointing to the same place?
This is just 'an' answer, not 'the' answer, but this worked for me.
First, I navigated to http://192.168.1.23:8888/ and got:
Fiddler Echo Service
GET / HTTP/1.1
Host: 192.168.1.23:8888
Proxy-Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
This page returned a HTTP/200 response
Originating Process Information: chrome:10568
To configure Fiddler as a reverse proxy instead of seeing this page, see Reverse Proxy Setup
You can download the FiddlerRoot certificate
I then navigated to http://192.168.1.41:8888/ and got the exact same response.
I believe it is safe to assume that both IP addresses point to the same place. Not sure if this is always the case, but it is true in mine.
It would appear that I am using multihoming.
On Tika's website it says (concerning tika-app-1.2.jar) that it can be used in server mode. Does anyone know how to send documents to, and receive parsed text from, this server once it is running?
Tika supports two "server" modes. The simpler and older one is the --server flag of Tika-App. The more functional, but more recent, one is the JAX-RS JSR-311 server component, which ships as an additional jar.
The Tika-App network server is very simple to use. Simply start Tika-App with the --server flag and a --port ### flag telling it which port to listen on. Then connect to that port and send it a single file; you'll get back the HTML version. NetCat works well for this: something like java -jar tika-app.jar --server --port 12345 followed by nc 127.0.0.1 12345 < MyFileToExtract will get you back the HTML.
The JAX-RS JSR-311 server component supports a few different URLs, for things like metadata, plain text, etc. You start the server with java -jar tika-server.jar, then make HTTP PUT calls to the appropriate URL with your input document and you'll get the resource back. There are loads of details and examples (including using curl for testing) on the wiki page.
The Tika App Network Server is fairly simple, only supports one mode (extract to HTML), and is generally used for testing / demos / prototyping / etc. The Tika JAXRS Server is a fully RESTful service which talks HTTP, and exposes a wide range of Tika's modes. It's the generally recommended way these days to interface with Tika over the network, and/or from non-Java stacks.
Just adding to @Gagravarr's great answer.
When talking about Tika in server mode, it is important to differentiate between two versions which can otherwise cause confusion:
tika-app.jar has the --server --port 9998 options to start a simple server
tika-server.jar is a separate component using JAX-RS
The first option only provides text extraction and returns the content as HTML. Most likely, what you really want is the second option, which is a RESTful service exposing many more of Tika's features.
You can simply download the tika-server.jar from the Tika project site. Start the server using
java -jar tika-server-x.x.jar -h 0.0.0.0
The -h 0.0.0.0 (host) option makes the server listen for any incoming requests; without it, it would only listen for requests from localhost. You can also add the -p option to change the port, which otherwise defaults to 9998.
Then, once the server has started you can simply access it using your browser. It will list all available endpoints.
Finally, to extract metadata from a file you can use cURL like this:
curl -T testWORD.doc http://example.com:9998/meta
This returns the metadata as key/value pairs, one per line. You can also have Tika return the results as JSON by adding the proper Accept header:
curl -H "Accept: application/json" -T testWORD.doc http://example.com:9998/meta
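The same call can be made from code. Here's a sketch using only Python's standard library that builds the equivalent PUT request (the host and port assume the defaults above; the helper name is illustrative, and you'd send the request with urllib.request.urlopen against a running Tika server):

```python
import urllib.request

def build_meta_request(data, host="http://localhost:9998"):
    # Equivalent of: curl -H "Accept: application/json" -T <file> <host>/meta
    # (curl -T issues a PUT with the file contents as the body)
    return urllib.request.Request(
        url=host + "/meta",
        data=data,
        method="PUT",
        headers={"Accept": "application/json"},
    )

req = build_meta_request(b"file contents here")
# resp = urllib.request.urlopen(req)  # run this against a live Tika server
print(req.get_method(), req.full_url)
```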
[Update 2015-01-19] Previously the comment said that tika-server.jar is not available as download. Fixed that since it actually does exist as a binary download.
To enhance Gagravarr's great answer:
If your document is fetched from a web server:
curl "http://myserver-domain/*path-to-doc*/doc-name.extension" | nc 127.0.0.1 12345
And if the document is protected by a password:
curl -u login:*password* "http://myserver-domain/*path-to-doc*/doc-name.extension" | nc 127.0.0.1 12345