I have an Envoy service running that forwards all requests to abc.com. I want to add a Lua script in envoy_on_request that parses the request body and checks whether it contains the string "test"; if it does, then instead of forwarding the request to abc.com, it should forward it to pqr.com.
I tried changing the host header, but it doesn't work.
function envoy_on_request(handle)
  local resultbody = handle:body(true)
  local length = resultbody:length()
  local result = resultbody:getBytes(0, length)
  if string.match(result, "test") then
    handle:headers():add("host", "pqr.com")
  else
    handle:headers():add("host", "abc.com")
  end
end
Is it possible to change the host dynamically?
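For clarity, the kind of thing I'm effectively trying to end up with looks roughly like the sketch below; using replace() on the ":authority" header is just my guess, and it assumes the route configuration picks the upstream cluster based on the host/:authority header (e.g. separate virtual hosts for abc.com and pqr.com):

-- sketch only, untested: swap the authority when the body contains "test"
function envoy_on_request(handle)
  local body = handle:body(true)
  local data = body:getBytes(0, body:length())
  -- plain-text find (no Lua patterns)
  if data ~= nil and string.find(data, "test", 1, true) then
    -- ":authority" is the pseudo-header form of the Host header
    handle:headers():replace(":authority", "pqr.com")
  end
end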
I need to create a VPC Endpoint and an ALB that targets the VPC Endpoint in CDK.
I found that InterfaceVpcEndpoint can return a vpcEndpointNetworkInterfaceIds attribute, so the missing part seems to be how to get the private IP addresses from those ENI IDs in a CDK way.
I found that CDK has a custom-resources package, and its example shows that I can use AwsCustomResource to call an AWS API (EC2/DescribeNetworkInterfaces) to get the IP address.
I tried writing a custom resource like the one below:
eni = AwsCustomResource(
    self, 'DescribeNetworkInterfaces',
    on_create=custom_resources.AwsSdkCall(
        service='ec2',
        action='describeNetworkInterfaces',
        parameters={
            'NetworkInterfaceId.N': [eni_id]
        },
        physical_resource_id=str(time.time())
    )
)
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress')
and pass ip into elbv2.IPTarget.
But it seems I missed something, because it complains that it needs a scalar, not a reference:
(.env) ➜ base-stack (master) ✔ cdk synth base --no-staging > template.yaml
jsii.errors.JavaScriptError:
Error: Expected Scalar, got {"$jsii.byref":"#aws-cdk/core.Reference#10015"}
at Object.deserialize (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:12047:23)
at Kernel._toSandbox (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7031:61)
at /Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7084:33
at Array.map (<anonymous>)
at Kernel._boxUnboxParameters (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7084:19)
at Kernel
....
Thanks!
The AwsCustomResource.get_data method returns a Reference object, which is what causes the issue. To get the CloudFormation token (!GetAtt "DescribeNetworkInterfaces"."NetworkInterfaces.0.PrivateIpAddress"), the Reference.to_string method must be called explicitly.
This:
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress')
Becomes:
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress').to_string()
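With that in place, a sketch of wiring the value into the target could look like the following; the module path matches the CDK Python packages I assume you're on, and target_group is a placeholder for your own ALB/NLB target group:

# sketch only: adjust module paths to your CDK version; target_group is a placeholder
from aws_cdk import aws_elasticloadbalancingv2_targets as elb_targets

# .to_string() turns the Reference into a plain CloudFormation token string
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress').to_string()

# the token resolves to the real private IP at deploy time, so it can be used
# anywhere a plain string IP is expected
target_group.add_target(elb_targets.IpTarget(ip))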
I'm working on a simple REST client for Docker Registry. For private registries, name resolution is pretty simple; if the image name is myregistry.io/myimage:latest, I look for https://myregistry.io/v2 and query the API there.
However, I notice that for Docker Hub it doesn't quite work that way. If I'm looking for ubuntu, I can expand that to docker.io/ubuntu:latest, but https://docker.io/v2 returns a 307 redirect to https://www.docker.com/v2, which just returns HTML. The actual registry endpoint is at https://registry-1.docker.io/v2.
Is this just a hardcoded special case in the docker client, or is there some extra logic to looking up registry endpoints that I'm unaware of? If it is just a special case, is there more to it than always going to registry-1.docker.io instead of docker.io?
The central Docker registry is a well-known special case, similar to Maven Central. You can see the defaults, for example, at https://github.com/docker/docker-ce/blob/ea449e9b10cebb259e1a43325587cd9a0e98d0ff/components/engine/registry/config.go#L42:
var (
	// DefaultNamespace is the default namespace
	DefaultNamespace = "docker.io"
	// DefaultRegistryVersionHeader is the name of the default HTTP header
	// that carries Registry version info
	DefaultRegistryVersionHeader = "Docker-Distribution-Api-Version"
	// IndexHostname is the index hostname
	IndexHostname = "index.docker.io"
	// IndexServer is used for user auth and image search
	IndexServer = "https://" + IndexHostname + "/v1/"
	// IndexName is the name of the index
	IndexName = "docker.io"
	// DefaultV2Registry is the URI of the default v2 registry
	DefaultV2Registry = &url.URL{
		Scheme: "https",
		Host:   "registry-1.docker.io",
	}
)
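If you just need the same behavior in your own client, a small sketch of the lookup (based only on the defaults above; the function and variable names are mine) could be:

package main

import (
	"fmt"
	"strings"
)

// resolveRegistryEndpoint maps an image reference to its v2 API base URL,
// special-casing the official registry the same way the Docker client does.
func resolveRegistryEndpoint(image string) string {
	host := "docker.io" // default namespace when no registry host is given
	if parts := strings.SplitN(image, "/", 2); len(parts) == 2 &&
		(strings.Contains(parts[0], ".") || strings.Contains(parts[0], ":")) {
		host = parts[0] // "myregistry.io/myimage:latest" -> "myregistry.io"
	}
	if host == "docker.io" || host == "index.docker.io" {
		host = "registry-1.docker.io" // the hardcoded special case
	}
	return "https://" + host + "/v2/"
}

func main() {
	fmt.Println(resolveRegistryEndpoint("ubuntu"))                       // https://registry-1.docker.io/v2/
	fmt.Println(resolveRegistryEndpoint("myregistry.io/myimage:latest")) // https://myregistry.io/v2/
}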
I am using lua-websockets (https://github.com/lipp/lua-websockets) to try to get a WebSocket server running.
Using the copas example they provided:
local copas = require'copas'
local server = require'websocket'.server.copas.listen
{
  port = 8080,
  protocols = {
    echo = function(ws)
      while true do
        local message = ws:receive()
        if message then
          ws:send(message)
        else
          ws:close()
          return
        end
      end
    end
  }
}
copas.loop()
This works: it starts listening on port 8080, and I am able to connect and get an echo response back.
The problem is when I try to integrate it with Heka. I start Heka and it starts the WebSocket server, but it hangs at "Loading plugin", because when Heka tries to "load" a plugin, it executes the Lua script.
Now my question is: how do I run the WebSocket server and still report success to Heka so that it can continue starting up? Simply put: once the WebSocket server is listening on 8080, return to Heka and say the Lua script has been executed successfully.
Thanks in advance!
Don't call copas.loop(), as it enters an indefinite loop that handles all copas socket interactions. You need to use copas.step() instead (see the "controlling copas" section of the documentation) and call it at an appropriate time from your Heka code (the call returns false on timeout and true when it handles something). In a GUI application it may be called from an IDLE handler.
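A rough sketch of how that can be structured (the function name here is just a placeholder for whatever hook Heka actually invokes periodically):

local copas = require'copas'

-- set up the websocket server exactly as in the question, but do NOT call copas.loop()

-- placeholder name: call this from whatever hook Heka invokes periodically
local function service_websockets()
  -- handle at most one pending socket event without blocking;
  -- returns false on timeout, true when it processed something
  return copas.step(0)
end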
I'm trying to generate a list of all domain names and their corresponding IP addresses from a pcap file, using the dpkt library available here.
My code is mostly based on this:
import socket
import dpkt

filename = raw_input('Type filename of pcap file (without extension): ')
path = 'c:/temp/PcapParser/' + filename + '.pcap'
f = open(path, 'rb')
pcap = dpkt.pcap.Reader(f)

for ts, buf in pcap:
    # make sure we are dealing with IP traffic
    try:
        eth = dpkt.ethernet.Ethernet(buf)
    except:
        continue
    if eth.type != 2048:
        continue
    # make sure we are dealing with the UDP protocol
    try:
        ip = eth.data
    except:
        continue
    if ip.p != 17:
        continue
    # filter on UDP assigned ports for DNS
    try:
        udp = ip.data
    except:
        continue
    if udp.sport != 53 and udp.dport != 53:
        continue
    # make the dns object out of the udp data and
    # check for it being a RR (answer) and for opcode QUERY
    try:
        dns = dpkt.dns.DNS(udp.data)
    except:
        continue
    if dns.qr != dpkt.dns.DNS_R:
        continue
    if dns.opcode != dpkt.dns.DNS_QUERY:
        continue
    if dns.rcode != dpkt.dns.DNS_RCODE_NOERR:
        continue
    if len(dns.an) < 1:
        continue
    # process and print responses based on record type
    for answer in dns.an:
        if answer.type == 1:  # DNS_A
            print 'Domain Name: ', answer.name, '\tIP Address: ', socket.inet_ntoa(answer.rdata)
The problem is that answer.name is not good enough for me, because I need the original domain name requested, not its CNAME representation. For example, one of the original DNS requests was for www.paypal.com, but its CNAME representation is paypal.112.2o7.net.
I looked closely at the code and realized I'm actually extracting the information from the DNS response (not the query). Then I looked at the response packet in Wireshark and saw that the original domain is there, both under 'Queries' and under 'Answers', so my question is: how can I extract it?
Thanks!
In order to acquire the name from the "Questions" section of the DNS response, via the dns.qd object provided by dpkt.dns, all I needed to do was this:
for qname in dns.qd: print qname.name
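Combined with the loop above (reusing the dns object and the same Python 2 print style), a hedged sketch that prints the originally queried name next to each A record:

# inside the 'for ts, buf in pcap:' loop, after dns has been parsed
queried = dns.qd[0].name if len(dns.qd) > 0 else '<unknown>'
for answer in dns.an:
    if answer.type == dpkt.dns.DNS_A:
        print 'Query: ', queried, '\tIP Address: ', socket.inet_ntoa(answer.rdata)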
How do you get Cloud Foundry to assign a port? I am adding applications and I'd like to have a different port for each, but VCAP_APP_PORT is not set. VCAP_APP_HOST is set, but VCAP_APP_PORT is not.
Take a look at http://show-env.cloudfoundry.com/
It's a Node application I knocked together just to output the environment and the request headers when you call it. The code looks like this:
var http = require('http');
var util = require('util');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write(util.inspect(process.env));
  res.write("\n\n************\n\n");
  res.end(util.inspect(req.headers));
}).listen(3000);
You can see VCAP_APP_PORT in the output.
It would be handy to know which framework you are using; however, all these variables are stored in the system environment, so it shouldn't really matter.
Cloud Foundry will automatically assign each application instance an IP address and port, and these values are accessible in the VCAP_* variables as Dan describes. You don't get to tell Cloud Foundry which port you prefer. Each instance of your app may receive a different IP address and port, so you should always interrogate the environment to find out what they are if you need that information.
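As a concrete illustration, a minimal Node sketch that picks up whatever port was assigned (the local fallback of 3000 is arbitrary):

var http = require('http');

// use the port Cloud Foundry assigned, or fall back to 3000 when running locally
var port = process.env.VCAP_APP_PORT || 3000;

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('listening on ' + port + '\n');
}).listen(port);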