Modifying the Security Group generated by CfnMicrosoftAD (aws-cdk)

CfnMicrosoftAD creates a security group - see https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html I need to allow outbound UDP access on port 1812 to a server in the same VLAN (i.e. add an outbound custom rule to the security group), but I cannot work out how to do this using the CDK. How can I reference the security group it creates?

You might be able to use an AwsCustomResource to make an API call to describeDirectories, pull the SecurityGroupId out of the VpcSettings in the response, and pass it to SecurityGroup.fromSecurityGroupId so you can then modify it.
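A sketch of how that could look in Java (untested; the construct IDs, the response path, and the 10.0.0.0/24 peer CIDR for the RADIUS server are assumptions, and ad is the CfnMicrosoftAD you already defined):

import java.util.List;
import java.util.Map;
import software.amazon.awscdk.customresources.AwsCustomResource;
import software.amazon.awscdk.customresources.AwsCustomResourcePolicy;
import software.amazon.awscdk.customresources.AwsSdkCall;
import software.amazon.awscdk.customresources.PhysicalResourceId;
import software.amazon.awscdk.customresources.SdkCallsPolicyOptions;
import software.amazon.awscdk.services.ec2.ISecurityGroup;
import software.amazon.awscdk.services.ec2.Peer;
import software.amazon.awscdk.services.ec2.Port;
import software.amazon.awscdk.services.ec2.SecurityGroup;

// Inside the Stack constructor, after defining the CfnMicrosoftAD `ad`:
AwsCustomResource describeDirectory = AwsCustomResource.Builder.create(this, "DescribeDirectory")
        .onCreate(AwsSdkCall.builder()
                .service("DirectoryService")
                .action("describeDirectories")
                .parameters(Map.of("DirectoryIds", List.of(ad.getRef())))
                .physicalResourceId(PhysicalResourceId.of("DescribeDirectory"))
                .build())
        .policy(AwsCustomResourcePolicy.fromSdkCalls(SdkCallsPolicyOptions.builder()
                .resources(AwsCustomResourcePolicy.ANY_RESOURCE)
                .build()))
        .build();

// Pull the security group id out of the flattened DescribeDirectories response.
String sgId = describeDirectory.getResponseField(
        "DirectoryDescriptions.0.VpcSettings.SecurityGroupId");

ISecurityGroup adSg = SecurityGroup.fromSecurityGroupId(this, "AdSecurityGroup", sgId);

// 10.0.0.0/24 is a placeholder for the subnet holding the RADIUS server.
adSg.addEgressRule(Peer.ipv4("10.0.0.0/24"), Port.udp(1812), "RADIUS to VLAN server");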

Related

Configure role for Airflow Connection

Is there a way to restrict Airflow Connections so that they're only visible to a particular role?
In particular, I would like a solution so that a user with a particular role can:
Access only those connections that are assigned to their role
View only those connections that are assigned to their role
I have looked at the following page and there are no instructions there on how to accomplish this:
https://airflow.apache.org/1.10.1/howto/manage-connections.html
You can add these restrictions through RBAC, but not to specific connections; it's all connections or none. To enable RBAC, you will need to be on version 1.10+ and set rbac = True under [webserver], as noted in https://github.com/apache/airflow/blob/master/UPDATING.md#new-webserver-ui-with-role-based-access-control. See the RBAC documentation at https://airflow.apache.org/security.html#rbac-ui-security for more details on the feature.
The permissions relevant to you are Connections and ConnectionModelView. An extra step would then be to use DAG-level access to ensure certain users can't access DAGs that use certain connections (1.10.2+ only, see https://github.com/apache/airflow/blob/master/UPDATING.md#dag-level-access-control-for-new-rbac-ui).
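For reference, enabling the RBAC UI on 1.10+ is a one-line change in airflow.cfg:

[webserver]
rbac = True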

Routing to same instance of Backend container that serviced initial request

We have a multi-service architecture consisting of an HAProxy front end (we can change this to another proxy if required), a MongoDB database, and multiple instances of a backend app running under Docker Swarm.
Once an initial request is routed to an instance (container) of the backend app, we would like all future requests from mobile clients to be routed to the same instance. The backend app uses TCP sockets to communicate with a VoIP PBX.
Ideally we would like to control the number of instances of the backend app using the replicas key in the docker-compose file. However, if a container died and was recreated, we would require that mobile clients continue routing to the same container, because each container holds state info.
Is this possible with Docker Swarm? We are thinking that each instance of the backend app, when created, gets an identifier which is then used to do some sort of path-based routing.
HAProxy has what you need; this article explains it all.
As a conclusion of the article, you may choose from two solutions:
IP source affinity to server, and application layer persistence. The latter is stronger/better than the first, but it requires cookies.
Here is an extract from the article:
IP source affinity to server
An easy way to maintain affinity between a user and a server is to use the user's IP address: this is called Source IP affinity.
There are a lot of issues doing that, and I'm not going to detail them right now (TODO++: another article to write).
The only thing you have to know is that source IP affinity is the last method to use when you want to "stick" a user to a server.
Well, it's true that it will solve our issue as long as the user uses a single IP address and never changes it during the session.
Application layer persistence
Since a web application server has to identify each user individually, to avoid serving one user's content to another, we may use this information, or at least try to reproduce the same behavior in the load-balancer, to maintain persistence between a user and a server.
The information we'll use is the session cookie, either set by the load-balancer itself or one set up by the application server.
What is the difference between persistence and affinity?
Affinity: when we use information from a layer below the application layer to maintain a client request to a single server
Persistence: when we use application layer information to stick a client to a single server
Sticky session: a session maintained by persistence
The main advantage of persistence over affinity is that it's much more accurate, but sometimes persistence is not doable, so we must rely on affinity.
Using persistence, we mean that we're 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
Affinity configuration in HAProxy / Aloha load-balancer
The configuration below shows how to do affinity within HAProxy, based on client IP information:
frontend ft_web
    bind 0.0.0.0:80
    default_backend bk_web

backend bk_web
    balance source
    hash-type consistent # optional
    server s1 192.168.10.11:80 check
    server s2 192.168.10.21:80 check
Session cookie set up by the load-balancer
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:
frontend ft_web
    bind 0.0.0.0:80
    default_backend bk_web

backend bk_web
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server s1 192.168.10.11:80 check cookie s1
    server s2 192.168.10.21:80 check cookie s2
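To tie this back to Docker Swarm: a hedged sketch, assuming HAProxy 1.8+ runs as a Swarm service on the same overlay network as a backend service named backend with 3 replicas (names and counts are placeholders). Resolving the individual task IPs through Docker's embedded DNS at tasks.backend, instead of the service VIP, lets the source hash map each client to one concrete container:

resolvers docker
    nameserver dns1 127.0.0.11:53

backend bk_web
    balance source
    hash-type consistent
    # srv1..srv3 are filled in from the DNS answer for tasks.backend
    server-template srv 3 tasks.backend:80 check resolvers docker init-addr libc,none

Note that a task recreated by Swarm comes back with a new IP, so its clients will be remapped to a different container; state that must survive a container restart still needs to live outside the container (for example, in MongoDB).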

Connect to Neo4j from a Node-Red flow and run a cypher query

I am trying to access Neo4j from a Node-Red flow.
I installed node-red-contrib-neo4j from the "Manage Palette" menu of Node-RED, in the browser interface (localhost:1880).
However, I can't connect to the Neo4j database, since I get an HTTP 401 error. In the neo4j node there are two fields: Name and URL. In addition,
I can't add text to the Cypher Query field.
What values should I provide?
Neo4j has authentication enabled by default (which is more secure), and when it is enabled your REST requests must specify an Authorization header. It does not look like the Node-RED web user interface provides a way to specify headers.
If you want to keep authentication enabled, you may need to modify the Node-RED source code to add support for user-specified headers.
Or, if a less secure setup is acceptable, you can disable Neo4j authentication by setting the config property dbms.security.auth_enabled to false.
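For example, on a Neo4j 3.x install this goes in conf/neo4j.conf (2.x keeps the same property in conf/neo4j-server.properties); restart Neo4j afterwards:

dbms.security.auth_enabled=false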

Emqtt - How to implement ACL for a huge number of clients

I am using the Emqtt (emqtt.io) broker for my next application. The scene is:
I'll have many clients (tens of thousands), and each of them will be publishing or subscribing to topics. But I want to restrict every client to publish and subscribe only on topics containing their own client ID. For example:
Topics will be-
my_device/12345/update
my_device/99998/update
my_device/88888/update
If the middle attribute is the client ID, how can I restrict clients to publish/subscribe only on their own particular topic, so that no one is able to subscribe to my_device/# and thereby receive all my messages?
I saw the ACL plugin and this code ( {allow, {user, "dashboard"}, subscribe, ["$SYS/#"]}. ), but there I have to define every client manually. And what if a new user is added; how will I add one more rule automatically? Because, to my understanding, this file is loaded on broker startup, right? I want to use an ACL based on some database. Can you help me with that?
The Emqtt user guide lists a set of plugins that can be used to store the ACL in a database:
http://emqtt.io/docs/v2/guide.html
The links in that doc are broken, but the projects are hosted under the same git organisation.
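For example, the MySQL plugin (emq_auth_mysql) keeps ACL rows in a database table, so adding a client is just an INSERT with no broker restart. A sketch of its config (the query shown is close to the plugin's default; server, credentials, and schema are placeholders):

auth.mysql.server = 127.0.0.1:3306
auth.mysql.username = root
auth.mysql.database = mqtt
auth.mysql.acl_query = select allow, ipaddr, username, clientid, access, topic from mqtt_acl where ipaddr = '%a' or username = '%u' or clientid = '%c'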
A. Auth plugins
1. Login
https://emqtt.io/docs/v2/guide.html#authentication
There are lots of ways to check a login:
HTTP
Redis/MySQL/...
2. ACL
ACL access can also be controlled the same way:
HTTP
Redis/MySQL/...
but the internal conf is more efficient.
B. Internal ACL
The magic variables in a topic pattern are:
%c - clientid
%u - username
The operations are:
subscribe
publish
pubsub
acl.conf example:
allow clientid XXX to subscribe to clients/XXX:
{allow, all, subscribe, ["clients/%c"]}.
allow username XXX to pub/sub to clients/XXX:
{allow, all, pubsub, ["clients/%u"]}.
deny all others:
{deny, all}.
https://github.com/emqx/emqx/wiki/ACL-Design#examples
(These examples are from v4, but v2 also supports %c and %u.)
To apply the change:
$ emqttd_ctl acl reload
NOTE: every node in the cluster must be configured this way.
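Applied to the topic scheme in the question, a minimal internal-ACL sketch (rules match top-down; %c expands to the connecting client's ID) would be:

%% each client may pub/sub only under its own client id
{allow, all, pubsub, ["my_device/%c/update"]}.
%% everything else, including my_device/#, is denied
{deny, all}.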
The best option is to use a plugin for auth/ACL. I prefer the MongoDB plugin, but there are other plugins provided.
From their docs on GitHub: MongoDB plugin setup for emqtt.
It works great for authentication, but I haven't yet been able to subscribe or publish using the plugin settings.
Also, if the plugins are giving you problems with authentication, try building your emqtt from source.

Membase? How does this work?

When I add an IP address and make a connection, does the client get all of the servers' available IP addresses?
Or does the client need to know at least two IP addresses for when one of them goes down?
This is the code I've been testing with (Java):
List<String> addrList = new ArrayList<String>();
addrList.add("192.168.20.105:11211");
addrList.add("192.168.20.106:11211");
addrList.add("192.168.20.101:11211");
try {
    // parse the "host:port" strings into socket addresses
    List<InetSocketAddress> addr = AddrUtil.getAddresses(addrList);
    mbsClnt = new MemcachedClient(new BinaryConnectionFactory(), addr);
} catch (IOException e) {
    e.printStackTrace();
}
If I've added only one IP address and that server goes down while I'm doing get and set operations,
will the client be able to connect to the other available servers?
I ask because if I add an observer and look at the available servers, I don't see any (when I add only one server to the list).
Does this mean I have to add as many IP addresses as possible to avoid connection failures?
Another question: I can see that when I add an IP address, I have to put in a port number, which is linked to a specific vBucket. Is there any problem with all the clients watching the same vBucket? If so, how am I supposed to balance the clients across different vBuckets?
Sorry if my English isn't quite clear.
Any kind of advice or answers will be very appreciated! Thanks!
The issue here is that you're using the memcached constructors in MemcachedClient. If you are on 2.7.x or lower, you want to use the constructor that takes a list of URIs, a bucket name, and a password. That constructor will connect to a Membase/Couchbase node and get a list of all servers in the cluster. Then, if you rebalance or fail over nodes, Spymemcached will do the right thing and connect to new nodes or drop connections to nodes leaving the cluster.
In Spymemcached 2.8.x and later we actually removed this functionality and placed it into a new project called Couchbase Client. In that project you will find only the constructor I mentioned above. This should make it more obvious what you should do. Couchbase Client 1.0.1 currently doesn't have support for views, but that will be added in the next release. Also, Couchbase Client is compatible with all versions of Membase.
One other thing: you only need to provide one URI in order to get a list of all nodes in the cluster, but it is recommended that you add as many URIs as you have servers in the cluster. The reason is that if the node you specify in the URI goes down, you will lose your connection to the cluster, since you won't be able to get cluster updates. If you specify more than one URI, then Spymemcached/Couchbase Client will try to connect to the next node in the list of URIs.
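For completeness, a minimal sketch of that bootstrap-list constructor using Couchbase Client (the node addresses, bucket name, and empty password are placeholders; 8091 is the cluster REST port and /pools the standard bootstrap path):

import java.net.URI;
import java.util.Arrays;
import java.util.List;
import com.couchbase.client.CouchbaseClient;

public class BootstrapExample {
    public static void main(String[] args) throws Exception {
        // One URI is enough to bootstrap, but list every node so the client
        // can still fetch cluster updates if the first one is down.
        List<URI> baseUris = Arrays.asList(
                URI.create("http://192.168.20.105:8091/pools"),
                URI.create("http://192.168.20.106:8091/pools"),
                URI.create("http://192.168.20.101:8091/pools"));

        // "default" bucket with an empty password -- placeholders
        CouchbaseClient client = new CouchbaseClient(baseUris, "default", "");
        client.set("greeting", 0, "hello").get(); // block until the write completes
        System.out.println(client.get("greeting"));
        client.shutdown();
    }
}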
