Travis CI, Static IP for the VM - travis-ci

I want to use Travis CI for testing a library, but it requires access to an API that only allows explicitly whitelisted IP addresses.
Is there a way to configure Travis to use only one external IP, either in the free or the paid version?
What I need is to be sure that the external IP of the VM in Travis is always one of a small set of predefined IPs.
I don't really want to configure proxies and such, although if push comes to shove, that's what I'll probably do.

There is no way to make the VMs use a single IP. We do have a subnet that we are currently using, but this can change at any time, so that option wouldn't be ideal either. Out of the options you mentioned, sounds like a proxy is the best solution.
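If you do go the proxy route, the test suite only needs its API calls pointed at a forward proxy whose egress IP is on the API's allow list. A minimal sketch in Node.js/TypeScript, assuming the third-party https-proxy-agent package and placeholder host names (proxy.example.com, api.example.com):

    // Route a single HTTPS request through a forward proxy with a known egress IP.
    import https from "node:https";
    import { HttpsProxyAgent } from "https-proxy-agent";

    // PROXY_URL would point at the machine whose IP the API has whitelisted.
    const proxyUrl = process.env.PROXY_URL ?? "http://proxy.example.com:3128";
    const agent = new HttpsProxyAgent(proxyUrl);

    https.get("https://api.example.com/v1/ping", { agent }, (res) => {
      console.log("status:", res.statusCode);
      res.resume();
    });

The PROXY_URL value could be set as an environment variable in .travis.yml, so switching proxies never touches the test code.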

Related

How to route all internet requests through a proxy in docker swarm

tl;dr: does Docker Swarm have a mandatory, centralized proxy setting that forces all internet traffic from every service hosted in the cluster through a proxy? Or is there any other tip on how to go about a global proxy solution in a swarm cluster?
Note: this is not a question about a reverse proxy.
I have a Docker Swarm cluster (moving to Kubernetes as a solution is off-topic).
I have 3 managers and 3 workers, and I label the workers according to the containers they are expected to host. The cluster only deploys Docker Swarm services; when I write "container" below, I'm referring to a Docker Swarm service container.
One of the workers is label-less, though active, and therefore does not host containers for any service. If I were to label that worker so it could host any container, I would run into problems with various firewalls that I don't always control, because its IP is simply not allowed.
This means I can't scale horizontally: every new worker I add to the cluster also adds a new IP that requests can originate from. Updating the many firewalls affected by each scaling event is a lot of work and simply not an option.
In my attempt to solve this on my own, I did what every desperate developer does and googled for a solution... and there is simple, official documentation for achieving this: https://docs.docker.com/network/proxy/
I followed the environment-variable examples on that page. Doing so did not really help, however: none of the traffic goes through the proxy I configured. After some digging, I noticed that this is because Node.js (all services are written in Node.js) ignores the proxy settings set via the environment. To make Node.js use these proxy settings, I would have to refactor a lot of components in a lot of services... a workload that is quite tremendous and possibly dangerous, given the different protocols and ports I use to connect to different infrastructure services outside the cluster...
I expect there to be a better solution for this: built-in functionality that forces all internet access from the containers through this proxy, a setting I don't have to make in the code of my implementations, a wrapping solution that I can control centrally.
Reading this again, I think maybe I should have tested the Docker client configuration described on the same page to see whether it has the effect I need, but I assume both approaches have the same outcome, since they are described on the same page with no noticeable difference mentioned in the documentation.
My question is: is there a solution, which I just can't seem to find, that wraps proxy functionality around all the services? Or is it a requirement to solve this in the implementation itself?
My thought is to maybe depend on a base image, which in turn depends on the Node.js image I use today, that is responsible for this wrapping functionality, though still at the implementation level. Doing so would, however, still mean inheriting a distributed solution of this kind: if I need to change the proxy configuration, I need to change it everywhere and redeploy everything... at least in a simpler setup without a shared data access layer.
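On the point about Node.js ignoring HTTP_PROXY/HTTPS_PROXY: one low-touch option, sketched below as an assumption rather than a Swarm feature, is the third-party global-agent package, which patches Node's global http/https agents so existing request code is routed through a proxy without per-call changes:

    // proxy-bootstrap.ts -- import this once per service, before any HTTP calls are made.
    // Assumes the third-party "global-agent" npm package; nothing here is provided by Docker or Swarm.
    import { bootstrap } from "global-agent";

    // Patches http.request / https.request (and libraries built on them) so outbound
    // requests go through the proxy named in GLOBAL_AGENT_HTTP_PROXY, e.g.
    // GLOBAL_AGENT_HTTP_PROXY=http://proxy.internal:3128 set in the service's environment.
    bootstrap();

Since the proxy address then lives in the service environment (for example in the stack file), changing it is a redeploy of configuration rather than a code change across services.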

Sending requests from Google Kubernetes Engine, multiple deployments, under one external IP address

The Google Cloud Platform Kubernetes Engine based backend deployment I work on has between 4 and 60 nodes running at all times, spanning two different services.
However, I want to interface with an API that employs IP whitelisting, which means that all outgoing requests would have to be funneled through one single IP address.
How do I do this? The deployment uses an Nginx Ingress controller, which doesn't offer many options when it comes to the egress side of things.
I tried setting up a VM outside of the deployment, but still on GCP in the same region, and was unable to set up a forward proxy. At least, not one that I could connect to from my local device. I'm not sure whether this was because of GCP's firewall or something of that sort. This was using Squid, as well as Apache, with no success with either.
I also looked at the Cloud NAT option, but it seems like I would have to recreate all the services, CI/CD pipelines, DNS settings, etc. I would ideally avoid that, as it would be a few days' worth of work and would call for some downtime of the systems as well.
Ideally I would have a working forward proxy. I tried looking for Docker images that would function as one, but that does not seem to be a thing, sadly. SSHing into a VM to set up such a proxy hasn't led to success yet, either.
You have already found the solution: you have to rebuild things using either Cloud NAT or an equivalent solution you build yourself. Even that is relatively recent and I've not actually tried it myself; as recently as six months ago we were told this was not supported for GKE. Our solution was the proxy idea you mentioned: an HTTP proxy running outside of GKE, with traffic directed through it at the app-code level rather than at the infrastructure level. It was not fun.
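For what "directing things through it at the app code level" can look like, here is a minimal sketch, assuming Node.js with the third-party undici package and placeholder host names for a proxy VM that owns the reserved static IP:

    // Send all undici fetch() calls through a forward proxy whose external IP is whitelisted.
    import { fetch, setGlobalDispatcher, ProxyAgent } from "undici";

    // EGRESS_PROXY and the host names below are placeholders, not real endpoints.
    setGlobalDispatcher(
      new ProxyAgent(process.env.EGRESS_PROXY ?? "http://proxy.example.internal:3128"),
    );

    async function callWhitelistedApi(): Promise<void> {
      const res = await fetch("https://partner-api.example.com/v1/status");
      console.log(res.status);
    }

    callWhitelistedApi().catch(console.error);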

How to configure Prometheus in a multi-location scenario?

I love using Prometheus for monitoring and alerting. Until now, all my targets (nodes and containers) lived on the same network as the monitoring server.
But now I'm facing a scenario where we will deploy our application stack (as a bunch of Docker containers) to several client machines in their networks. Nearly all of the clients' networks are behind a firewall or NAT, so scraping becomes quite difficult.
As we're still accountable for our stack, I'd like to have a central monitoring server, alerting, and dashboards.
I was wondering what the best architecture could be if I want to implement this with Prometheus, but I couldn't find any convincing approaches. My ideas so far:
Use a Pushgateway on our side and push all data out of the client networks. As the docs state, it's not intended to be used that way: https://prometheus.io/docs/practices/pushing/
Use a federation setup (https://prometheus.io/docs/prometheus/latest/federation/): place a Prometheus server in every client network behind a reverse proxy (to enable SSL and authentication) and aggregate the relevant metrics there. Open/forward just a single port for federation scraping.
Other, more experimental setups, such as SSH tunneling (e.g. https://miek.nl/2016/february/24/monitoring-with-ssh-and-prometheus/) or a VPN!?
Thank you in advance for your help!
Nobody posted an answer, so I will try to give my opinion on the second choice, because that's what I think I would do in your situation.
The second setup seems the most flexible: you have access to the data and only need to open one port for the federating server, so it should still be secure.
Another bonus of this type of setup is that even if the firewall stops working for one reason or another, you will still have a Prometheus instance scraping locally. You will get an alert because you won't be able to reach the server(s), but when the connection comes back you will have all the data. You won't have a hole in the Grafana dashboards caused by missing data, apart from during the incident itself.
The issue with this setup is that you need to maintain as many servers as there are networks. A solution for this would be to have a Packer image, or maybe an Ansible playbook, to deploy them.

How can I somewhat securely run distccd on a docker image in the cloud?

I'm compiling things on a raspberry pi and it's not going fast enough, even when I use my desktop's CPU to help.
I could just install distcc the old-fashioned way on a cloud server, but what if someday I wanted to quickly spin up a bunch of servers for a minute with Docker Machine?
distccd can use SSH auth, but I don't see a good way to run both SSH and distccd, and it seems there would be some hassle in managing SSH keys.
What if I configured distcc to only accept connections from the WAN IP of my house (and then turned the image off as soon as it was done)?
But it would be great to make something other Raspberry Pi users could easily spin up.
You seem to already know the answer to this: set up distcc to use SSH. This will ensure encrypted communication between your distcc client and the distcc servers you have deployed as Docker images in the cloud. You have highlighted that the cost of doing this would be the time spent setting up an SSH key that is accepted by all of your Docker images. From memory, this key could be the same for all the Docker nodes, as long as they all have the same user name using the same key. Is that really such a complex task?
You ask for a slightly less secure option for building your compile farm. Limiting access based on the internet-facing IP address of your house would limit the scope and increase the difficulty of others using your build cluster. Someone might spoof that IP address and gain access to your distcc servers, but that would just cost you their runtime. The larger concern is that your code could be transmitted in plain text over the internet to these distcc servers. If that is not a big concern, then it could be considered low risk.
An alternative might be to set up a secure remote network of Docker nodes and set up VPN access to them. This would bind your local machine to the remote network, and you could consider the whole thing a secured LAN. If it is considered safe to have the Docker nodes talk between themselves unencrypted within the cloud, it should be just as secure to have a VPN link to them and do the same.
The best option might be to dig out some old PCs and set those up as local distcc servers. Within a LAN there is no need for security.
You mention a wish to share this with other Raspberry Pi users. There have been other public compile farms in the past, but many of them have fallen out of favour. Distributing such things publicly, as computational projects such as BOINC do, works poorly because network latency and transfer rates can slow builds significantly.

Linked Docker Containers with Mesos/Marathon

I'm having great success so far using Mesos, Marathon, and Docker to manage a fleet of servers and the containers I'm placing on them. However, I'd now like to go a bit further and start doing things like automatically linking an HAProxy container to each main Docker service that starts, or providing other daemon-based, containerized services that are linked to and only available to the single parent container.
Normally, I'd start the helper service first with some name; then, when I started the real service, I'd link it to the helper and everything would be fine. How does this model fit into Marathon and Mesos, though? It seems, for now at least, that the containerization assumes a single container.
I had one idea: start the helper service first on whatever host it can find, then add a constraint to the real service that its hostname must equal the helper service's hostname, but that seems like it would cause issues with resource offers and race conditions for those resources.
I've also thought to provide an "embed", or "deep-link" functionality to docker, or to the executor scripts that start the docker containers.
Before I head down any of these paths, I wanted to find out if someone else had solved this problem, or if I was just horribly over thinking things.
Thanks!
you're wandering in uncharted territory! ☺
There are multiple approaches here, and none of them is perfect, but the situation will improve in future versions of Docker, thanks to orchestration hooks.
One way is to use good old service discovery and registration. I.e., when a service starts, it figures out its publicly available address and registers itself in e.g. Zookeeper, Etcd, or even Redis. Since it's not trivial for a service to figure out its publicly available address (unless you adopt some conventions, e.g. always mapping port X:X instead of letting Docker assign random ports), you might want to do the registration from outside. That means that your orchestration layer (Mesos in this case) would start the container, then figure out the host and port, and put that in your service discovery system. I'm not extremely familiar with Marathon, but you should be able to register a hook for that. Then other containers just look up the endpoint address in the service discovery registry, plain and simple.
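As a concrete illustration of the registration step, here is a hedged sketch using Redis (one of the registries mentioned above) and the node-redis client; SERVICE_NAME, HOST_IP, and HOST_PORT are hypothetical values that the orchestration layer or an entrypoint wrapper would have to supply:

    // Register this container's public endpoint in Redis with a short TTL,
    // refreshing it periodically so the entry expires if the container dies.
    import { createClient } from "redis";

    const serviceName = process.env.SERVICE_NAME ?? "web";
    const endpoint = `${process.env.HOST_IP}:${process.env.HOST_PORT}`;
    const key = `services/${serviceName}/${endpoint}`;

    async function register(): Promise<void> {
      const client = createClient({ url: process.env.REDIS_URL ?? "redis://registry:6379" });
      await client.connect();

      await client.set(key, endpoint, { EX: 30 });                      // initial registration
      setInterval(() => {
        client.set(key, endpoint, { EX: 30 }).catch(console.error);     // heartbeat refresh
      }, 10_000);
    }

    register().catch(console.error);

Consumers can then list keys under services/<name>/ to find live endpoints; the TTL doubles as a crude health signal.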
You could also look at Skydock, which automatically registers DNS names for your containers with Skydns. However, it's currently single-host, so if you like that idea, you'll have to extend it somehow to support multiple hosts, and maybe SRV records.
Another approach is to use "well-known entry points". This is actually a simplified case of service discovery. It means that you make sure your services always run on pre-set hosts and ports, so that you can use those addresses statically. Of course, this is bad (because it will make your life harder when you want to reproduce the environment for testing/staging purposes), but if you have no clue at all about service discovery, well, it could be a start.
You could also use Pipework to create one (or multiple) virtual networks spanning multiple hosts, binding your containers together. Pipework lets you assign IP addresses manually, or automatically through DHCP. This approach is generally not recommended, but it's a good fit if you also want to plug your containers into an existing network architecture (e.g. VLANs...).
No matter which solution you decide to use, I highly recommend "pretending" that you're using links. That is, instead of hard-coding your app configuration to connect to (random example) my-postgresql-db:5432, use environment variables like DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT (as if it were a link), and set those variables when starting the container. That way, if you "fold down" your containers into a simpler environment without service discovery etc., you can easily fall back on links without effort.
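For example, an app can read the same variable names a Docker link to a hypothetical db container would create, regardless of whether a real link, Marathon, or a wrapper script set them; a small sketch assuming Node.js and the pg client:

    // Connect to Postgres using link-style environment variables, falling back to local defaults.
    import { Pool } from "pg";

    const pool = new Pool({
      host: process.env.DB_PORT_5432_TCP_ADDR ?? "127.0.0.1",
      port: Number(process.env.DB_PORT_5432_TCP_PORT ?? 5432),
      user: "postgres",
      database: "postgres",
    });

    export async function ping(): Promise<void> {
      const { rows } = await pool.query("SELECT 1 AS ok");
      console.log(rows[0]);
    }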
