Multiple serverless processes on different ports - serverless

I have multiple serverless applications that I run locally using the serverless-offline plugin. I set the ports like this:
custom:
  serverless-offline:
    httpPort: 4000
and in the other serverless config:
custom:
  serverless-offline:
    httpPort: 3000
At any time I can run only one service; the other fails with:
Unexpected error while starting serverless-offline lambda server on
port 3002: { Error: listen EADDRINUSE: address already in use
127.0.0.1:3002
But I am not using port 3002 anywhere, so what is this error, and why does it mention 3002?

If you go to the serverless-offline docs, you can see that there are three different ports serverless-offline uses:
$ sls offline --help | grep " port "
--httpPort ......................... HTTP port to listen on. Default: 3000
--lambdaPort ....................... Lambda http port to listen on. Default: 3002
--websocketPort .................... Websocket port to listen on. Default: 3001
You have to specify all three of them if you want to run multiple serverless-offline servers side by side. The first serverless config should thus look like:
custom:
  serverless-offline:
    httpPort: 4000
    websocketPort: 4001
    lambdaPort: 4002
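For the second service, the defaults (3000/3001/3002, per the help output above) can stay implicit, or you can set all three explicitly as well; the only requirement is that no port is shared between services. A possible explicit config for the second service:

```yaml
custom:
  serverless-offline:
    httpPort: 3000
    websocketPort: 3001
    lambdaPort: 3002
```

This also explains the original error: both services set only httpPort, so lambdaPort defaulted to 3002 in both, and the second one failed with EADDRINUSE on a port neither config mentioned.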

Related

HAProxy config for TCP load balancing in docker container

I'm trying to put an HAProxy load balancer in front of my RabbitMQ cluster (which is set up with nodes in separate Docker containers). I cannot find many examples of an HAProxy config for this setup. Here is mine:
global
    debug
defaults
    log global
    mode tcp
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend main
    bind *:8089
    default_backend app
backend app
    balance roundrobin
    mode http
    server rabbit-1 172.18.0.2:8084
    server rabbit-2 172.18.0.3:8085
    server rabbit-3 172.18.0.4:8086
In this example, what should I put in place of the IP addresses of the Docker containers?
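No answer is attached to this related question, but a common approach, assuming HAProxy itself runs as a container on the same user-defined Docker network as the RabbitMQ nodes, is to reference the containers by name and let Docker's embedded DNS resolve them (the service names rabbit-1..rabbit-3 and RabbitMQ's default AMQP port 5672 are assumptions here, not from the question):

```
backend app
    balance roundrobin
    mode tcp
    server rabbit-1 rabbit-1:5672 check
    server rabbit-2 rabbit-2:5672 check
    server rabbit-3 rabbit-3:5672 check
```

For raw AMQP traffic the backend should stay in mode tcp (matching the defaults section above), not mode http. Hard-coded 172.18.0.x addresses work only until the containers are recreated and receive new addresses.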

Connection refused when attempting to connect to a docker container on an EC2

I'm currently running a Spring Boot application in a Docker container on an EC2 instance. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
    networks:
      default:
The Docker container deploys and comes up as healthy, with the health check command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
If I run docker ps -a, I can see for the ports:
0.0.0.0:8893->8893
My ALB health check, however, is returning a 502, so I've temporarily allowed connections from my IP directly to the EC2 instance in the security group. The rules are:
Allow Ingress on 8893 from my Alb security group
Allow Ingress on 8893 from my IP
Allow Egress to anywhere (0.0.0.0)
When I try to hit the health check endpoint of my app using the public DNS of the EC2 instance on port 8893 in Postman, I get Error: connect ECONNREFUSED.
If I take my Docker container down and then simulate a web server using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body, which indicates it's not a problem with the security groups.
The actuator endpoint for Spring Boot is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 and status UP.
Any suggestions for what I might be missing here, or how I could debug this further? It seems like Docker isn't picking up connections to the port for some reason.
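No answer is included for this related question; a short debugging sequence, assuming the compose service name my-app from the file above, might look like:

```
# The compose file maps host 8893 to container 8839 (note the transposed
# digits), while the health check targets 8893 inside the container --
# worth double-checking the actual mapping first:
docker compose port my-app 8839

# Can the endpoint be reached from the host itself?
curl -v http://localhost:8893/my-app/v1/actuator/health

# Is the app inside the container bound to 0.0.0.0 rather than 127.0.0.1?
docker compose exec my-app sh -c 'netstat -tln || ss -tln'
```

Two common causes fit these symptoms: the host port forwarding to a container port nothing listens on (the 8893/8839 transposition), or Spring Boot bound to 127.0.0.1 inside the container (server.address), which makes the localhost health check pass while external connections are refused.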

Deploying rails docker on ec2 and redirecting traffic from 80 to 4000

I have a dockerized Rails app that I am trying to deploy to AWS EC2. I managed to get it running in Docker on EC2 with port 4000 mapped. docker-compose.yml:
app:
  image: davidgeismar/artifacts-app:latest
  command: 'rails server -p 4000'
  ports:
    - "4000:4000"
  volumes:
    - ./log/production.log:/artifacts_data_api/log/production.log
On the AWS dashboard, in security groups, I allowed HTTP traffic from any source:
HTTP TCP 80 0.0.0.0/0
I wanted to open port 3000, but that is not possible on the AWS dashboard. From what I understand, I am now supposed to redirect traffic from port 80 to port 3000. I followed these instructions to do that: https://serverfault.com/questions/320614/how-to-forward-port-80-to-another-port-on-the-samemachine
Now when I try to access my application server through my browser using ipv4:public_instance_ip/80 or ipv4:public_instance_ip:80, I always get:
This site can’t be reached
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Could you provide guidance on how to achieve this ?
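The question is left unanswered in this list, but the usual approach from the linked serverfault thread is a NAT redirect on the instance; note the compose file above actually publishes the app on port 4000, so that is the target port here:

```
# Redirect incoming TCP traffic on port 80 to port 4000 on the same host.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000
```

A simpler alternative with Docker is to skip iptables entirely and publish the container directly on port 80, i.e. "80:4000" in the ports section of docker-compose.yml; the Docker daemon runs as root, so binding the privileged port is not a problem.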

Ruby on Rails: how to run two servers (different apps) on Amazon EC2 at the same time?

I am trying to deploy two different Rails apps on one EC2 instance. I can run each one on its own and it works fine, but I need both apps running at the same time, accessible from anywhere, not only localhost. I enabled (added rules for) the two TCP ports 3000 and 3001. This is my attempt:
/path/app1$ rails s -d
/path/app2$ rails s -b0.0.0.0 -p 3001 -d
This is the ps -ef command output:
dev+ 3028 1 0 17:10 ? 00:00:00 puma 3.11.2 (tcp://localhost:3000) [/]
dev+ 3160 1 0 17:14 ? 00:00:00 puma 3.11.3 (tcp://0.0.0.0:3001) [/]
I also tried running app1 with -b0.0.0.0 so it can listen from anywhere, but with the same result: only one app is listening on 0.0.0.0.
What am I missing? How can I run two servers at the same time with both listening on 0.0.0.0?
Thanks.
By default, according to the Rails documentation, the server will only listen on the localhost / loopback interface. This is actually confirmed in the output snippet that you posted.
In the first command for app1, you aren't telling it to listen on 0.0.0.0, so you'd need to change your first command to:
/path/app1$ rails s -b0.0.0.0 -p 3000 -d
Both applications can listen on 0.0.0.0, but they can't share the same port. You have already configured app1 to listen on port 3000 (Rails default), and app2 to listen on port 3001, so they should both coexist peacefully, once you make the change above.
See also: What does binding a Rails Server to 0.0.0.0 buy you?
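A quick way to confirm the fix from the instance itself (a sketch; the ports are those from the question):

```
# After restarting app1 with -b0.0.0.0, both apps should show a
# listener bound to 0.0.0.0 rather than 127.0.0.1:
sudo ss -tlnp | grep -E ':300[01]'

# And both should answer locally as well as via the public IP:
curl -sI http://localhost:3000/ | head -n1
curl -sI http://localhost:3001/ | head -n1
```

If ss still shows 127.0.0.1:3000, the old app1 server is likely still running daemonized; kill the old puma process before restarting with the new bind address.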

Putting a uWSGI fast router in front of uWSGI servers running in docker containers

I have multiple Flask-based webapps running in Docker containers (their processes need to be isolated from the host OS). To run these apps I use uWSGI servers inside the containers. Incoming requests should hit a uWSGI FastRouter with a subscription server (as described here: http://uwsgi-docs.readthedocs.org/en/latest/SubscriptionServer.html). When a container starts, its uWSGI should announce itself as a subdomain, based on some internal configuration.
So the setup looks like this:
Request ---> FastRouter ----> container | myapp1 |
                        |
                        ----> container | myapp2 |
I'm trying to test this on a single host running both the fast router as well as some docker containers.
The FastRouter is started using
uwsgi --fastrouter :1717 --fastrouter-subscription-server localhost:2626 --fastrouter-subscription-slot 1000
Question 1 Do I need to do anything else to get the subscription server running? Is this started together with the fastrouter process?
The containers have two ports mapped from the host to the container: 5000 (the webapp) and 2626 (to subscribe to the fast router).
So they're started like this:
docker run -d -p 5000:5000 -p 2626:2626 myImage $PATH_TO_START/start.sh
Where in start.sh the uWSGI is started as
uwsgi --http :5000 -M --subscribe-to 127.0.0.1:2626:/test --module server --callable env --enable-threads
The output looks good, it prints at the end:
spawned uWSGI master process (pid: 58)
spawned uWSGI worker 1 (pid: 73, cores: 1)
spawned uWSGI http 1 (pid: 74)
subscribing to 127.0.0.1:2626:/test
On the host I can do
curl localhost:5001
And I see the web server greeting me from inside the container. However, doing
curl localhost:1717/test
gets no response.
Question 2
Am I getting anything fundamentally wrong here? Should I test differently?
Question 3
How can I debug the FastRouter?
Edit:
Still struggling with this setup. I'm using a separate VPS now for the fastrouter. It is started using
uwsgi --uid fastrouter --master --fastrouter :80 --fastrouter-subscription-server :2626 --daemonize uwsgi.log --pidfile ./uwsgi.pid --chmod-socket --chown-socket fastrouter
WARNING: Think before copying the above call for your project, since it opens the subscription service up publicly. My plan is to secure it afterwards using the key-signing facilities provided by uwsgi, since the VPS doesn't have a private network available.
netstat -anp shows
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 843/uwsgi
udp 0 0 0.0.0.0:2626 0.0.0.0:* 843/uwsgi
unix 3 [ ] STREAM CONNECTED 9089 843/uwsgi
unix 3 [ ] STREAM CONNECTED 9090 843/uwsgi
unix 3 [ ] SEQPACKET CONNECTED 8764 843/uwsgi
unix 3 [ ] SEQPACKET CONNECTED 8763 843/uwsgi
Anyway, using uwsgi nodes with --http :5000 --module server --callable env --enable-threads --subscribe-to [Subscription-Server-IP-Address]:2626:/test --socket-timeout 120 --listen 128 --buffer-size 32768 --honour-stdin still leads to the same result: uwsgi logs 'subscribing to', but http://[Subscription-Server-IP-Address]/test is not reachable. Is this kind of routing even possible? Every example I can find only assigns subdomains like [mysub].example.com, root domains, or root domains with some port number. This page includes a hint that the subscription key should be part of a routable address: http://projects.unbit.it/uwsgi/wiki/Example.
So I have a follow-up question:
Is the FastRouter even meant to let nodes announce new routes that haven't yet been set statically in a DNS zone file? I don't really care whether it's http://[key].example.com or http://example.com/[key], what's important is that these keys can be generated from inside a Docker container at setup time of the uwsgi server.
Generally, "dockered" apps run in a different network namespace, so the loopback of a Docker app is not the same as the fastrouter's: subscribing to 127.0.0.1:2626 from inside a container never reaches a subscription server on the host.
Use unix sockets for subscriptions; they are a great way to communicate across namespaces.
Your commands are otherwise good. The fastrouter is pretty verbose, so if you do not see messages in its logs it is not working (and eventually you can strace it).
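Following the answer's suggestion, a minimal unix-socket variant might look like this (the socket path, image name, and key myapp1.example.com are assumptions, not from the question):

```
# Host: fastrouter on TCP 1717, subscriptions over a unix socket.
uwsgi --fastrouter :1717 --fastrouter-subscription-server /tmp/uwsgi-sub.sock

# Container: share the subscription socket via a bind mount, then
# subscribe with a hostname-style key rather than a URL path.
docker run -d -p 5000:5000 -v /tmp/uwsgi-sub.sock:/tmp/uwsgi-sub.sock myImage \
    uwsgi --http :5000 --module server --callable env \
          --subscribe-to /tmp/uwsgi-sub.sock:myapp1.example.com

# The fastrouter matches the request's Host header against the key.
curl -H 'Host: myapp1.example.com' http://localhost:1717/
```

This also bears on the follow-up question: the fastrouter keys on the Host header, not on URL paths, which is why curl localhost:1717/test never matched a node subscribed under /test. Nodes can announce freshly generated keys at container startup without any DNS zone change, as long as clients (or a wildcard DNS entry) send the matching Host header.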
