How to add a TLS listener to a network load balancer in AWS

I'm attempting to add a TLS listener to a network load balancer. According to https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbv2-listener this shouldn't be possible, since the SSLCertificateArns and SSLPolicy options note "This option is only applicable to environments with an Application Load Balancer".
This is the configuration (some details masked as XXX):
option_settings:
  aws:elbv2:listener:443:
    DefaultProcess: default
    ListenerEnabled: 'true'
    Protocol: TCP
    SSLCertificateArns: arn:aws:acm:XXX
    SSLPolicy: ELBSecurityPolicy-XXX
Sure enough, when I do this I get Failed Environment update activity. Reason: Configuration validation exception: Invalid option value: 'null' (Namespace: 'aws:elbv2:listener:443', OptionName: 'SSLCertificateArns'): SSL options are not supported for Network Load Balancers. Other AWS docs support this position, e.g. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-nlb.html#environments-cfg-nlb-intro.
However, I can add a TLS listener and specify the security policy and certificate via the AWS console. I also tried Protocol: TLS in option_settings, which is what I can use in the AWS console, but I get the error Failed Environment update activity. Reason: Configuration validation exception: Invalid option value: 'TLS' (Namespace: 'aws:elbv2:listener:443', OptionName: 'Protocol'): Only TCP Protocols are supported for Network Load Balancers. Is this simply something that the AWS console does and can't be set via option_settings, or am I misunderstanding something important?
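For what it's worth, the underlying ELBv2 API does accept TLS listeners on network load balancers, which is presumably what the console is calling. A rough AWS CLI equivalent of the console action looks like this (all ARNs below are placeholders):

# Hypothetical CLI equivalent of adding a TLS listener via the console; ARNs are placeholders
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:XXX \
  --protocol TLS --port 443 \
  --certificates CertificateArn=arn:aws:acm:XXX \
  --ssl-policy ELBSecurityPolicy-XXX \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:XXX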

Related

Not able to check if file exists on S3 (Failed to open TCP connection to 169.254.169.254:80)

We are using the paperclip gem for S3 functionality (upload, fetch, check if a file is present).
We are not providing AWS keys in code; we are using the role-based mechanism.
Things work fine on ECS on EC2 but break on ECS Fargate, even though both have the same role and the same attached policies.
On Fargate we are getting:
Failed to open TCP connection to 169.254.169.254:80 (Invalid argument - connect(2) for "169.254.169.254" port 80)
Any ideas?
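For context, 169.254.169.254 is the EC2 instance metadata endpoint, which Fargate tasks cannot reach; on Fargate, task-role credentials are served from the container credentials endpoint instead. A quick sanity check from inside the task, assuming curl is available in the container image:

# The SDK locates task-role credentials via this environment variable (set by ECS)
echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
# Fetch the temporary credentials the SDK should be using on Fargate
curl "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"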

ActiveMQ Artemis GUI Jolokia access in docker container

I'm running ActiveMQ Artemis inside of docker containers for our three environments (DEV/QA/PROD).
The management console typically runs on port 8161 and so I included this in the artemis create statement when I created the broker.
--http-host 0.0.0.0 --http-port 8161
So this causes the following two changes that I can see:
bootstrap.xml gets the host/port:
<web bind="http://0.0.0.0:8161" path="web">
<app url="redhat-branding" war="redhat-branding.war"/>
<app url="artemis-plugin" war="artemis-plugin.war"/>
<app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/>
<app url="console" war="console.war"/>
</web>
jolokia-access.xml gets the host/port:
<allow-origin>*://0.0.0.0*</allow-origin>
I'm trying to access the ActiveMQ Artemis Hawtio management console from a remote computer, but the exposed docker ports are not 8161. They're the mapped ports 38161, 48161, & 58161.
So when I log in to the management console, I get:
Operation unknown failed due to: java.lang.Exception : Origin http://10.0.20.2:58161 is not allowed to call this agent
Uncaught TypeError: Cannot read property 'apply' of undefined (http://10.0.20.2:58161/console/app/app.js:16:14127)
Uncaught TypeError: Cannot read property 'apply' of undefined (http://10.0.20.2:58161/console/app/app.js:16:14127)
...
I believe the problem here is that your jolokia-access.xml is using this:
<allow-origin>*://0.0.0.0*</allow-origin>
However, you're attempting to access the console via http://10.0.20.2:58161 which isn't allowed based on your jolokia-access.xml. Therefore you need to change the jolokia-access.xml to allow the IP:port you're actually going to use to connect.
You can read more about the jolokia-access.xml in the Jolokia security documentation.
For clarity's sake, the meta-address 0.0.0.0 is basically the "no particular address" placeholder and in the context of binding a listener to a network interface it means the listener should bind/listen to all interfaces. However, in the context of <allow-origin> for Jolokia security it doesn't mean allow all origins. The <allow-origin> supports literal matches and wild-cards (as noted in the documentation linked above). Therefore, if 0.0.0.0 is specified it attempts to literally match 0.0.0.0. There is no way to disable Jolokia security from the create command. If you were to pass something like --http-host 10.0.20.* to the create command then 10.0.20.* would be used to bind the webserver in bootstrap.xml which would fail.
There is also the option of using --relax-jolokia, which will disable strict origin checking and may help your use case.
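If you recreate the broker, a minimal create command with that flag might look like the following (the broker directory name is just an example):

# Example only: same host/port flags as above, plus relaxed Jolokia CORS checking
./artemis create mybroker --http-host 0.0.0.0 --http-port 8161 --relax-jolokia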
You just need to make a change in the jolokia-access.xml file to edit the CORS settings:
<allow-origin>*://*</allow-origin>
For more information you can refer to: https://medium.com/@hasnat.saeed/setup-activemq-artemis-on-ubuntu-18-04-76bb4975308b

Using data flow with https on cloud foundry

I am trying to deploy a Data Flow server on Cloud Foundry and create a simple app.
Only an https endpoint can be exposed, and SSL is handled by CF itself, so I cannot enable https using this:
http://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-security-enabling-https
How do I make the Data Flow server use https?
I have this error:
dataflow:>app list
Command failed org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://dataflow-server.run.aws-usw02-pr.ice.predix.io/apps": Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused)
Thanks in advance.
Best Regards
As you already mentioned, you cannot enable https at the container level inside Cloud Foundry today. The traffic between the router and the Diego cell is not encrypted (unless you are using IPsec).
So your Data Flow server would not be configured with https; just deploy the server as it is. You should rely on your Cloud Foundry install to have an open port at 443 on its load balancer that forwards traffic to the router. Later CF incarnations support certificate placement at the router level.
Now, at the client (dataflow shell), if you are using a valid certificate you don't need to do anything, but if you have a self-signed certificate you need to tell it to accept self-signed certificates, or skip validation altogether.
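For example, starting the shell against the https route might look like the following; --dataflow.uri is the standard shell property, while the skip-ssl-validation property is an assumption and may differ by Data Flow version:

# Hypothetical shell startup: point at the https route and accept a self-signed certificate
java -jar spring-cloud-dataflow-shell.jar \
  --dataflow.uri=https://dataflow-server.run.aws-usw02-pr.ice.predix.io \
  --dataflow.skip-ssl-validation=true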

Identity Server hosted in IIS on localhost and called from a client

Server Error in '/' Application.
IDX10803: Unable to create to obtain configuration from:
https://identityserver:444/identity/.well-known/openid-configuration'.
Description: An unhandled exception occurred during the execution of
the current web request. Please review the stack trace for more
information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException: IDX10803: Unable
to create to obtain configuration from:
'https://identityserver:444/identity/.well-known/openid-configuration'.
All certificates are installed on the local machine using MMC.
The Identity Server application is hosted in IIS and called from the client application; that is when I face this issue.
Your port looks wrong in your authority. Please double-check your port; port 444 is registered for the Simple Network Paging Protocol (SNPP).
In your API's authorization middleware, check whether the correct port is set on the authority option.
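A quick way to verify the authority is reachable on that port is to request the discovery document directly from the machine running the client (add -k only if the certificate is self-signed):

curl -k https://identityserver:444/identity/.well-known/openid-configuration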

AWS Elastic Beanstalk Load Balancer settings for SSL

I have a Ruby on Rails app configured for SSL only, and I set up an HTTPS listener on my Elastic Load Balancer with HTTPS as the instance protocol.
With this configuration, my application does not resolve, and I don't understand why. If, however, I change the instance protocol to HTTP, everything works as expected.
Could someone explain why this is, please?
When configuring an ELB to listen on HTTPS, you must upload a certificate to IAM and reference it from the ELB.
The procedure is described in the documentation.
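For reference, the upload itself can be done with the AWS CLI; the certificate name here matches the ARN used in the sample below, and the certificate file names are placeholders:

# Upload the server certificate to IAM so the ELB listener can reference it
aws iam upload-server-certificate \
  --server-certificate-name my_certificate_name \
  --certificate-body file://certificate.pem \
  --private-key file://private-key.pem \
  --certificate-chain file://certificate-chain.pem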
When using AWS Elastic Beanstalk, you can also configure your ELB and SSL certificate from a config file located in your application source's .ebextensions directory.
A sample config file is:
option_settings:
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerHTTPSPort
    value: 443
  - namespace: aws:elb:loadbalancer
    option_name: SSLCertificateId
    value: arn:aws:iam::012345678901:server-certificate/my_certificate_name
Have a look at the detailed documentation for the possible options.
