How to avoid anti-spoofing measures in OpenStack? - network-programming

I have created a virtual private network across two clusters in different regions. While the masters are able to communicate, I can't reach a worker across the virtual private network. I assume this is due to anti-spoofing measures.
How I tried to fix it
I read (https://www.packetcoders.io/openstack-neutron-port-security-explained/ and https://www.redhat.com/en/blog/whats-coming-openstack-networking-kilo-release) that one needs to add allowed address pairs to all instances so that traffic from those addresses is allowed to reach the worker. However, I am unsure which IP needs to be allowed. I added the master's local IP in the subnet, its floating IP, and its IP in the virtual private network. None of these fixed my issue; I am still unable to reach the workers.
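For reference, here is a minimal sketch of adding such a pair with the openstacksdk Python client; the cloud entry, port name, and the 10.8.0.0/24 WireGuard CIDR are assumptions, not values from the question. Note that allowed_address_pairs accepts whole CIDRs, so the entire VPN range can be whitelisted in one entry:

```python
# Sketch: whitelist a whole CIDR on an instance port so Neutron's
# anti-spoofing rules stop dropping routed VPN traffic.
# Assumptions: "mycloud" clouds.yaml entry, "master-eth0" port name,
# 10.8.0.0/24 as the WireGuard network.
import openstack

conn = openstack.connect(cloud="mycloud")

port = conn.network.find_port("master-eth0")  # hypothetical port name

# allowed_address_pairs accepts CIDRs as well as single addresses.
conn.network.update_port(
    port,
    allowed_address_pairs=[{"ip_address": "10.8.0.0/24"}],
)
```

The CLI equivalent is `openstack port set --allowed-address ip-address=10.8.0.0/24 <port-id>`. Whichever way you set it, it typically has to be applied to the ports on both ends (master and workers), since traffic with VPN source addresses leaves and enters both.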
What I am looking for
A complete answer would be great, but ways to debug this further would be highly appreciated, too.
More info to avoid the XY problem
I have started two clusters in different regions and connected the masters using WireGuard. They can already ping each other using their subnet addresses, which is why I think my problem is not WireGuard-related. If you think otherwise, I am happy to give additional info on this setup to avoid asking the wrong question.
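One way to debug this further is to confirm that port security is actually what drops the traffic. A small openstacksdk sketch (cloud entry and server name are placeholders) that dumps the relevant per-port settings:

```python
# Debugging sketch: print the anti-spoofing state of every port that
# belongs to a given instance. Names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")
server = conn.compute.find_server("worker-0")

for port in conn.network.ports(device_id=server.id):
    print(port.id)
    print("  port_security_enabled:", port.is_port_security_enabled)
    print("  allowed_address_pairs:", port.allowed_address_pairs)
    print("  security_groups:", port.security_group_ids)
```

If the pairs you added do not show up here, they landed on the wrong port; if they do show up and pings still fail, run tcpdump on the worker to see whether the packets arrive at all, which separates a Neutron drop from a plain routing problem.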

Related

Overriding configuration on a running Tarantool instance

Can anyone tell me whether it is possible to override individual box.cfg parameters on a running instance, for example to add a replica? For several days I have been trying to deploy three replicas on three hosts via docker service stack.
When I bring the instances up by hand on each server, everything works; when they are deployed through the stack, they do not see each other and crash. I've tried all sorts of approaches. I hung an endpoint on the target nodes that, when queried, returns the IP of the machine on which the container is coming up; if that IP matches one of those listed in SEED, it substitutes the container's internal IP instead (otherwise the instance cannot connect to itself).
In theory this all works as I described, but I suspect the real problem is that the instance does not reserve its address until box.cfg is declared. Unfortunately, I cannot look inside the container because it never comes up. My idea is to start all three nodes with minimal settings, have them listen on the subnet as soon as they are up, and as soon as a node finds another one, write it into replication and override box.cfg. Please correct me, anyone who has experience with this.
Some of the box.cfg parameters are dynamic, for example box.cfg{listen=}. You can set this one from Lua code whenever you wish. In your case, if the container gets its IP address later, you need to specify only the port in listen; that way, Tarantool will listen on all available interfaces.
The replication_source is a bit trickier. You can set it dynamically, but your first (initializing) call to box.cfg must already include replication_source. This is because any instance initialized without this parameter creates its own replica set, and that makes it impossible to join it to another replica set later.
You can read more about Tarantool replication architecture here: https://www.tarantool.io/en/doc/latest/book/replication/repl_architecture/
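As a concrete sketch of the dynamic part, you can push new replication configuration to an already running instance from outside the container through the Tarantool Python connector. Host, port, credentials, and peer URIs below are placeholder assumptions:

```python
# Sketch: change dynamic box.cfg parameters on a running Tarantool
# instance via the Python connector. Host, port, credentials, and
# peer URIs are placeholders.
import tarantool

conn = tarantool.Connection("10.0.0.1", 3301, user="admin", password="secret")

# box.cfg may be called again at runtime with dynamic parameters.
# On Tarantool 1.7+ the parameter is called 'replication'; on older
# versions it was 'replication_source'.
conn.eval("""
box.cfg{
    replication = {
        'replicator:pass@10.0.0.1:3301',
        'replicator:pass@10.0.0.2:3301',
    },
}
""")
```

Remember the caveat above, though: this only helps for instances whose first, initializing box.cfg call already included a replication source.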

Need help setting up a dev/test Corda Network with docker

I want to set up an environment where I have several VMs, representing several partners, and where each VM hosts one or more nodes. Ideally, I would use Kubernetes to bring my environment up and down. I have understood from the docs that this has to be done as a dev network, not as my own compatibility zone or anything.
However, the steps to follow are not clear (to me). I have used Dockerform and the provided docker image, but neither seems to be the way to do what I need.
My current (it changes with the hours) understanding is that:
a) I should create a network between the VMs that will be hosting nodes. To do so, I understand I should use Cordite or the network bootstrapper JAR. The Cordite documentation seems clearer than the Corda docs, but I haven't been able to try it yet. Should one or the other be my first step? Can anyone shed some light on how?
b) Once I have my network created, I need a certifying entity (thanks @Chris_Chabot for pointing it out!).
c) The next step should be running deployNodes so that I create the config files. Here, I am not sure whether I can indicate in deployNodes at which IPs the nodes should be created, or whether I just need to create the Dockerfiles, certificate folders and so on, and distribute them across the VMs accordingly. I am not sure either about how to point to the network map service.
Personally, I guess that I will not use the Dockerfiles if I am going to use Kubernetes, and that I only need to distribute the certificates and config files to all the slave VMs so they are available to the nodes when they are launched.
To be clear, and honest :D, this is even before including any CorDapp in the containers; I am just trying to have the environment ready. Basically: start a process that builds the nodes, distributes the config files among the slave VMs, and runs the Docker containers with the nodes. As explained in a comment, the goal here is not testing CorDapps, it is testing how to deploy an operative distributed dev environment.
ANY help is going to be ABSOLUTELY welcome.
Thanks!
(Developer Relations @ R3 here)
A network of Corda nodes needs three things:
- A notary node, or a pool of multiple notary nodes
- A certification manager
- A network map service
The certification manager is the root of the trust in the network, and, well, manages certificates. These need to be distributed to the nodes to declare and prove their identity.
The nodes connect to the network map service, which checks their certificate to see if they have access to the network and, if so, adds them to the list of nodes that it manages -- and it distributes this list of node identities + IP addresses to all the nodes on that network.
Finally the nodes use the notaries to sign the transactions that happen on the network.
Generally we find that most people start developing on the https://testnet.corda.network/ network, and later deploy to the production corda.network.
One benefit of that is that it already comes with all these pieces (certification manager, network map, and a geographically distributed pool of notaries). The other benefit is that it guarantees interoperability with other parties in the future, as everyone uses the same root certificate authority -- with your own network, other third parties couldn't just connect, as they'd be on a different cert chain and couldn't be validated.
If however you have a strong reason to want to build your own network, you can use Cordite to provide the network map and certman services.
In that case step 1 is to go through the setup and configuration instructions on https://gitlab.com/cordite/network-map-service
Once that is fully set up and running, https://docs.corda.net/permissioning.html has some more information on how the certificates are set up, and the "Joining an existing Compatibility Zone" section in https://docs.corda.net/docker-image.html has instructions on how to get a Corda docker image / node to join that network by specifying which network map / certman URLs to use.
Oh, and on the IP network question: the network map service stores a combination of the X509 identity and the IP address for each node, which it distributes to the network -- this means that every node, including the notaries, certman and network map, needs to be able to connect to that IP address, either by all being on the same network that you created, or by having public IP addresses.
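To make that last step concrete, here is a rough sketch of launching such a node with the Docker SDK for Python. The image tag, environment variable names, ports, and URLs follow my reading of the docker-image docs linked above, but treat all of them as assumptions to verify against your Corda version:

```python
# Sketch: launch a Corda node container that joins an existing
# network map / doorman (for example a Cordite NMS) via the Docker
# SDK for Python. Image tag, env var names, ports, paths, and URLs
# are assumptions -- check them against the docker-image docs.
import docker

client = docker.from_env()

client.containers.run(
    "corda/corda-zulu-java1.8-4.4",   # assumed image tag
    "config-generator --generic",     # assumed: generates node.conf, then starts the node
    environment={
        "MY_LEGAL_NAME": "O=PartnerA,L=Berlin,C=DE",
        "MY_PUBLIC_ADDRESS": "partner-a.example.com",
        "NETWORKMAP_URL": "https://nms.example.com",  # Cordite NMS
        "DOORMAN_URL": "https://nms.example.com",     # certman endpoint
        "NETWORK_TRUST_PASSWORD": "trustpass",
        "MY_EMAIL_ADDRESS": "admin@example.com",
    },
    ports={"10200/tcp": 10200},       # assumed P2P port, must be reachable by peers
    volumes={"/opt/corda/partner-a": {"bind": "/etc/corda", "mode": "rw"}},
    detach=True,
)
```

The same container spec translates fairly directly into a Kubernetes Pod/Deployment once the flow works, which fits the plan of distributing only certificates and config files to the VMs.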

Why not use the host network in Docker, since Docker and Kubernetes networking is so complex?

Using Docker can simplify CI/CD, but it also introduces complexity: not everybody is able to master Docker networking, even when selecting open-source solutions like Flannel or Calico.
So why not use the host network in Docker, and what is lost if you do?
I know port conflicts are one point; are there any others?
There are two parts to an answer to your question:
Pods must have individual, cluster-routable IP addresses, and one should be very cautious about recycling them
You can, if you wish, not use any software defined network (SDN)
As for the first part: it is usually a huge hassle to provision a big enough CIDR to house the address range required to support every Pod running across every Namespace, with enough space to avoid recycling addresses for a very long time. Thus, having an SDN allows using "fake" addresses that one need not bother the "real" network with knowing about. No routers need to be updated, no firewalls, no DHCP, whatever.
That said, as with the second part, you don't have to use an SDN: that's exactly what the container network interface (CNI) is designed to paper over. You can use the CNI provider that makes you the happiest, including using static IP addresses or the outer network's DHCP server.
But your comment about port collisions is pretty high up the list of reasons one wouldn't just want to set hostNetwork: true and be done with it; I'm actually not certain whether the default Kubernetes scheduler takes hostNetwork: true and the declared ports: on the containers: into account in order to avoid co-scheduling two containers that would conflict. I guess try it and see -- or, better yet, don't try it; use CNI so the next poor person who tries to interact with your cluster doesn't find a snowflake setup.
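To illustrate the collision concern, here is a sketch of a hostNetwork pod built with the official Kubernetes Python client (pod and image names are placeholders). Two copies of this pod scheduled onto the same node would both try to bind the node's port 80:

```python
# Sketch: a hostNetwork pod, via the official Kubernetes Python client.
# Pod name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="host-net-demo"),
    spec=client.V1PodSpec(
        host_network=True,  # pod shares the node's network namespace
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.25",
                # With hostNetwork, containerPort IS a port on the node,
                # so two such pods on one node collide on :80.
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)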

On-prem docker swarm deployment with HA

I’m doing on-prem deployments using docker swarm and I need application and DB high availability.
As far as application HA is concerned, it works great within Docker (service discovery and load balancing), but I'm not sure how to use it on my network. I mean: how can I assign a virtual IP to all of my Docker managers, so that if any of them goes down, that virtual IP automatically points to another Docker manager in the cluster? I don't want to have a single point of failure in my architecture, which is why I'm not inclined to use any (single) reverse-proxy solution in front of my swarm cluster (because, to my understanding, if nginx/HAProxy goes down, the whole system goes into the abyss. I would love to learn that I'm wrong).
Secondly, I use WebSockets in my application for push notifications, which don't behave normally with all the load balancing because the socket handshakes get disrupted.
I want a solution to these problems without writing anything HA-specific and non-generic in code (like hard-coding IPs, etc.). Any suggestions? I hope I explained my problem correctly.
Docker Flow Proxy or Traefik can be placed on the set of swarm nodes that you want to receive traffic for incoming connections, and use DNS routing to get packets to the correct containers. Both have a sticky-sessions option (I know Docker Flow Proxy does; I'm not sure about Traefik).
Then you can either:
1. If your incoming connections are just client HTTP/S requests, use DNS round robin with multiple A records, which works great, or
2. Buy an expensive, hardware fault-tolerant reverse proxy like an F5, or
3. Use some network-layer IP failover at the OS and physical-network level (not really related to Docker), though I'm not sure how well that would work with Swarm.
Number 2 is the typical solution in private datacenters that need full HA at all layers.
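If you go with option 1, here is a quick way to sanity-check that the DNS name actually publishes one A record per manager node (the hostname is a placeholder):

```python
# Sketch: verify DNS round robin by listing all A records for a name.
# Hostname is a placeholder; expect one address per swarm manager.
import socket

infos = socket.getaddrinfo("swarm.example.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # e.g. three entries for a three-manager swarm
```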

Is there a way to disconnect or sandbox an instance network interface

I am looking for how I can take an existing instance and either change its network "connection" to a sandboxed network (which is easy enough to create, since each project supports up to 5 networks) or start the instance with no network interface at all and just use console access. Alternatively, what is the recommended process for doing a forensic investigation into an instance that is suspected of running processes or services that should not be communicating with other instances in the project or with any external address? Thanks in advance.
You can leave instances without a public IP address. Instances created this way will not be accessible by machines outside your project.
Have a look at the documentation concerning IPs.
You may also need to set up a NAT gateway so instances can communicate with outside machines.
You can use forwarding rules, in combination with routing, to discard packets sent from or to an instance.
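As a sketch of the "no public IP plus isolated network" approach with the Compute Engine API client (project, zone, instance, and network names are placeholders): omitting accessConfigs from the network interface is what leaves the instance without an external IP.

```python
# Sketch: create a sandboxed instance with no external IP using the
# Google Compute Engine API client. Project/zone/network names are
# placeholders; credentials come from Application Default Credentials.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

body = {
    "name": "forensics-sandbox",
    "machineType": "zones/us-central1-a/machineTypes/e2-small",
    "disks": [{
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
        },
    }],
    # No "accessConfigs" entry under the interface => no external IP.
    # Attaching to an isolated network keeps the instance reachable
    # only over the console.
    "networkInterfaces": [{"network": "global/networks/sandbox-net"}],
}

compute.instances().insert(
    project="my-project", zone="us-central1-a", body=body
).execute()
```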
