NestJS microservices deployment on ECS with AWS Copilot - docker

Hey everyone, I am trying to deploy a basic NestJS microservice stack to production:
One application is a basic NestJS application that will be used as an API gateway and will communicate with the services over TCP transport.
The second application is a NestJS microservice.
// Gateway/src/main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(8000);
}
bootstrap();
And the service:
// Restaurant/src/main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { Transport, TcpOptions } from '@nestjs/microservices';

async function bootstrap() {
  const app = await NestFactory.createMicroservice(AppModule, {
    transport: Transport.TCP,
    options: {
      host: '0.0.0.0',
      port: 8001,
    },
  } as TcpOptions);
  await app.listen();
}
bootstrap();
Then, in my Gateway, I register the microservice client in the module like this:
// Gateway/src/app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ClientProxyFactory, Transport } from '@nestjs/microservices';
import { AppController } from './app.controller';
import { RestaurantController } from './restaurant.controller';

@Module({
  imports: [
    ConfigModule.forRoot({
      envFilePath: [`.env.stage.${process.env.STAGE}`],
    }),
  ],
  controllers: [RestaurantController, AppController],
  providers: [
    ConfigService,
    {
      provide: 'RESTAURANT_SERVICE',
      useFactory: (configService: ConfigService) => {
        return ClientProxyFactory.create({
          transport: Transport.TCP,
          options: {
            host: '0.0.0.0',
            port: 8001,
          },
        });
      },
      inject: [ConfigService],
    },
  ],
})
export class AppModule {}
When I start each application on my local machine, everything works perfectly.
Now I have used AWS Copilot to deploy my api-gw and my service into the same Copilot app.
For the api-gw I chose Load Balanced Web Service.
For the service I chose Backend Service.
api-gw manifest file:
name: api-gw
type: Load Balanced Web Service

http:
  path: '/'

image:
  build: Dockerfile
  port: 8000

cpu: 256       # Number of CPU units for the task.
memory: 512    # Amount of memory in MiB used by the task.
platform: linux/x86_64  # See https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#platform
count: 1       # Number of tasks that should be running in your service.
exec: true     # Enable running commands in your container.

network:
  connect: true # Enable Service Connect for intra-environment traffic between services.
restaurant service manifest file:
name: restaurant
type: Backend Service

image:
  build: Dockerfile
  port: 8001

cpu: 256       # Number of CPU units for the task.
memory: 512    # Amount of memory in MiB used by the task.
platform: linux/x86_64  # See https://aws.github.io/copilot-cli/docs/manifest/backend-service/#platform
count: 1       # Number of tasks that should be running in your service.
exec: true     # Enable running commands in your container.

network:
  connect: true # Enable Service Connect for intra-environment traffic between services.
The deployment of both services works fine, but when I send a request to the api-gw, it tries to connect to the restaurant service and I get this error:
Error: connect ECONNREFUSED 0.0.0.0:8001
As you can see, I enabled the network connect property on both services in the manifest files.
Thanks for your help.

The network field enables AWS Service Connect.
To use TCP, please use an NLB (the default for Load Balanced Web Services is an ALB). See this Copilot docs page for instructions on specifying an NLB.
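A hedged sketch of what that could look like in the api-gw manifest (field names per the Copilot Load Balanced Web Service manifest reference; ports taken from the question, so adjust to your setup):

name: api-gw
type: Load Balanced Web Service

http: false        # drop the default ALB listener
nlb:
  port: 8000/tcp   # provision a Network Load Balancer forwarding raw TCP to the container

image:
  build: Dockerfile
  port: 8000

network:
  connect: true

Separately, note that ECONNREFUSED 0.0.0.0:8001 means the gateway is dialing 0.0.0.0, i.e. its own container, rather than the restaurant task. With Service Connect enabled, a sketch of the client factory pointing at the service's endpoint name (assuming the default restaurant alias; RESTAURANT_HOST is a hypothetical env var) would be:

// In the useFactory from the question:
return ClientProxyFactory.create({
  transport: Transport.TCP,
  options: {
    host: configService.get('RESTAURANT_HOST') ?? 'restaurant', // not 0.0.0.0
    port: 8001,
  },
});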

Related

Axios GET call times out, but the same request with cURL works

Context:
The app is currently running in Docker containers.
There are three containers in total, all of them attached to the same network:
- MariaDB
- Flask app
- Vue app (node-16-buster)
When trying to call an API on my Flask backend, the axios request times out with an error.
However, when I copy the URL and just try curl (from the vue container's terminal), it works like a charm.
No such problems were observed when I ran everything on my local machine.
This is the app's vite.config.js file:
import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  resolve: {
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url))
    }
  },
  server: {
    port: 80,
    host: "0.0.0.0"
  }
})
I tried fiddling with different Docker network configurations, but that yielded no results.
Yes, this indeed turned out to be correct. Since the code runs in the browser, I just had to change the URL,
i.e. pointing to the host where the Docker containers run, rather than pointing to the Docker container itself.
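For example (hypothetical URLs and endpoint; the exact port comes from your compose port mapping), the browser-side call must target the Docker host, not a compose service name:

import axios from 'axios'

// This code runs in the user's browser, outside the compose network,
// so compose service names like "flask-app" do not resolve there.
const api = 'http://localhost:5000'    // port published on the Docker host: works
// const api = 'http://flask-app:5000' // compose service name: times out
axios.get(`${api}/some-endpoint`).then(res => console.log(res.data))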

Serverless Lambda: socket hang up with Docker Compose

We have a lambda with a POST event. Before deploying, we need to test the whole flow locally, so we're using serverless-offline. We have our main app inside a docker-compose setup and we are trying to call this lambda from the main app. We're getting this error: Error: socket hang up
I first thought it could be a Docker configuration issue in the Dockerfile or docker-compose.yml, but we tested with an Express app using the same Dockerfile as the lambda, and I can hit that endpoint from the main app. So now we know it is not a Docker issue but rather the serverless.yml.
service: csv-report-generator

custom:
  serverless-offline:
    useDocker: true
    dockerHost: host.docker.internal
    httpPort: 9000
    lambdaPort: 9000

provider:
  name: aws
  runtime: nodejs14.x
  stage: local

functions:
  payments:
    handler: index.handler
    events:
      - http:
          path: /my-endpoint
          method: post
          cors:
            origin: '*'

plugins:
  - serverless-offline
  - serverless-dotenv-plugin
Here is our current configuration; we've been trying different configurations without success. Any idea how we can hit the lambda?

How to connect remote docker instance using Pulumi?

I have created a VM instance in GCP using Pulumi and installed Docker on it. I am trying to connect to the remote Docker instance, but the connection fails because it asks for host key verification in a pop-up window.
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
  },
  { dependsOn: dockerInstallation }
);
I am able to run Docker containers locally, but I want to run the same on the VM. The code snippet is here.
With the recent version of "@pulumi/docker": "^3.2.0", you can now pass the SSH options. Reference:
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
    sshOpts: [
      "-o",
      "StrictHostKeyChecking=no",
      "-o",
      "UserKnownHostsFile=/dev/null",
    ],
  },
  { dependsOn: dockerInstallation }
);

How to pass environment variable to Dockerfile through Pulumi?

I have made a simple Flask app to practice Pulumi.
It gets an env variable set through the Dockerfile, and I intend to host it on AWS Fargate, with RDS Postgres as the database.
Here is the Flask app:
import os

from flask import Flask, request
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://{}".format(
    os.environ.get("DATABASE_URL")
)
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)
migrate = Migrate(app, db)


class CarsModel(db.Model):
    __tablename__ = "cars"

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String())
    model = db.Column(db.String())
    doors = db.Column(db.Integer())

    def __init__(self, name, model, doors):
        self.name = name
        self.model = model
        self.doors = doors

    def __repr__(self):
        return f"<Car {self.name}>"


@app.route("/")
def hello():
    return {"hello": "world"}


@app.route("/cars", methods=["POST", "GET"])
def handle_cars():
    if request.method == "POST":
        if request.is_json:
            data = request.get_json()
            new_car = CarsModel(
                name=data["name"], model=data["model"], doors=data["doors"]
            )
            db.session.add(new_car)
            db.session.commit()
            return {"message": f"car {new_car.name} has been created successfully."}
        else:
            return {"error": "The request payload is not in JSON format"}
    elif request.method == "GET":
        cars = CarsModel.query.all()
        results = [
            {"name": car.name, "model": car.model, "doors": car.doors} for car in cars
        ]
        return {"count": len(results), "cars": results, "message": "success"}


@app.route("/cars/<car_id>", methods=["GET", "PUT", "DELETE"])
def handle_car(car_id):
    car = CarsModel.query.get_or_404(car_id)

    if request.method == "GET":
        response = {"name": car.name, "model": car.model, "doors": car.doors}
        return {"message": "success", "car": response}
    elif request.method == "PUT":
        data = request.get_json()
        car.name = data["name"]
        car.model = data["model"]
        car.doors = data["doors"]
        db.session.add(car)
        db.session.commit()
        return {"message": f"car {car.name} successfully updated"}
    elif request.method == "DELETE":
        db.session.delete(car)
        db.session.commit()
        return {"message": f"Car {car.name} successfully deleted."}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
Here is the Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3.8

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

ENV FLASK_APP main.py
ENV DATABASE_URL localhost

RUN flask db init
RUN flask db migrate
RUN flask db upgrade

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Run main.py when the container launches
CMD ["python", "main.py"]
Here is the index.ts file for Pulumi:
import * as awsx from "@pulumi/awsx";
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const vpc = new awsx.ec2.Vpc("custom");

// Step 1: Create an ECS Fargate cluster.
const cluster = new awsx.ecs.Cluster("first_cluster", { vpc });

const securityGroupIds = cluster.securityGroups.map(g => g.id);

const dbSubnets = new aws.rds.SubnetGroup("dbsubnets", {
    subnetIds: vpc.publicSubnetIds,
});

const db = new aws.rds.Instance("postgresdb", {
    engine: "postgres",
    instanceClass: "db.t2.micro",
    allocatedStorage: 20,
    dbSubnetGroupName: dbSubnets.id,
    vpcSecurityGroupIds: securityGroupIds,
    name: "dummy",
    username: "dummy",
    password: "123456789",
    publiclyAccessible: true,
    skipFinalSnapshot: true,
});

const hosts = pulumi.all([db.endpoint.apply(e => e)]);
const environment = hosts.apply(([postgresHost]) => [
    { name: "DATABASE_URL", value: postgresHost },
]);

// Step 2: Define the Networking for our service.
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "net-lb", { external: true, securityGroups: cluster.securityGroups, vpc });
const atg = alb.createTargetGroup(
    "app-tg", { port: 8000, deregistrationDelay: 0 });
const web = atg.createListener("web", { port: 80, external: true });

// Step 3: Build and publish a Docker image to a private ECR registry.
const img = awsx.ecs.Image.fromPath("app-img", "./app");

// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: img,
            cpu: 102 /*10% of 1024*/,
            memory: 50 /*MB*/,
            portMappings: [web],
            environment: environment,
        },
    },
    desiredCount: 5,
}, { dependsOn: [db] });

// Step 5: Export the Internet address for the service.
export const url = web.endpoint.hostname;
Now, when I do pulumi up, I get this:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?
(Background on this error at: http://sqlalche.me/e/e3q8)
    at /Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker.ts:546:15
    at Generator.next (<anonymous>)
    at fulfilled (/Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker/docker.js:18:58)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
error: The command '/bin/sh -c flask db migrate' returned a non-zero code: 1
Now, I know this is because it is trying to connect to localhost, which is the default, but how do I pass in the host name of the db resource?
Thanks
UPDATE 1: I tried removing ENV DATABASE_URL localhost.
After removing ENV DATABASE_URL localhost:
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 490, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "None" to address: Name or service not known
(Background on this error at: http://sqlalche.me/e/e3q8)
at /Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/#pulumi/docker.ts:546:15
at Generator.next (<anonymous>)
at fulfilled (/Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/#pulumi/docker/docker.js:18:58)
I'd consider it bad practice to run the migrations during the Docker build. What happens if the build fails afterwards? How can you control which changes are applied to which environment? I think there are better solutions to this problem.
Those migrations could also be applied when the container boots up in Fargate, e.g. by putting those commands into an entrypoint script or by executing the migration at process startup (basically in your main.py), as described here: https://flask-migrate.readthedocs.io/en/latest/#command-reference. A sketch of that follows.
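As a minimal sketch of the migrate-at-startup option (assuming the main.py from the question, with the RUN flask db ... steps removed from the Dockerfile):

from flask_migrate import upgrade

if __name__ == "__main__":
    # Apply pending migrations at container boot, when the task's real
    # DATABASE_URL environment variable is available, not at image build time.
    with app.app_context():
        upgrade()
    app.run(host="0.0.0.0", port=8000)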
Another reason for not doing it during pulumi up is that this would also require a firewall rule allowing your local machine to access the database (this might already be "solved" by your publiclyAccessible setting, though).
If you still want to keep this action in the build, you need a different way of providing the database URL to step 3. The env is only used during step 4 (setting up Fargate). For step 3 you could leverage build args (https://docs.docker.com/engine/reference/builder/#arg) and pass them via Pulumi as shown at https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild; see the sketch after this answer.
Keep in mind that this adds some security issues because you open up the database to the public, which wouldn't be necessary otherwise. So I'd definitely go with a different approach, as described above.
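For completeness, a hedged sketch of the build-arg route (assuming classic awsx's Image.fromDockerBuild helper; the arg name is illustrative and must match an ARG declared in the Dockerfile):

// index.ts: replace Image.fromPath in step 3 with a build that forwards the endpoint.
const img = awsx.ecs.Image.fromDockerBuild("app-img", {
    context: "./app",
    args: {
        // The Dockerfile would need `ARG DATABASE_URL` and
        // `ENV DATABASE_URL=$DATABASE_URL` before its RUN flask db ... steps.
        DATABASE_URL: db.endpoint,
    },
});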

How to connect socket.io inside docker-compose between containers

I have one container that serves HTTP on port 4000.
It has a Socket.IO server attached.
docker-compose:
dashboard-server:
  image: enginetonic:compose1.2
  container_name: dashboard-server
  command: node src/service/endpoint/dashboard/dashboard-server/dashboard-server.js
  restart: on-failure
  ports:
    - 4000:4000

integration-test:
  image: enginetonic:compose1.2
  container_name: integration-test
  testRegex "(/integration/.*|(\\.|/)(integration))\\.jsx?$$"
  tty: true
server:
const http = require('http').createServer(handler)
const Io = require('socket.io')
const io = Io(http)

io.on('connection', socket => {
  logger.debug('socket connected')
})

io.use((socket, next) => {
  logger.debug('socket connection established.')
  next() // pass control on, or the handshake never completes
})

http.listen(4000, '127.0.0.1', () => {
  console.log(
    `Server running at http://127.0.0.1:4000/`
  )
})
output in docker:
Server running at http://127.0.0.1:4000/
https is listening: true
Now, I am trying to connect to this server from another container like this:
file:
const IoC = require('socket.io-client')

const url = `ws://dashboard-server:4000`
const ioc = IoC.connect(url)

ioc.on('error', error => {
  console.log(error.message)
})
ioc.on('connect', res => {
  console.log('connect')
})
ioc.on('connect_error', (error) => {
  console.log(error.message)
})
output:
xhr poll error
When I run both locally in the terminal, I get the correct response:
{"message":"socket connection established","level":"debug"}
Why isn't the socket making a connection inside the containers, when locally it does?
What am I doing wrong?
Edit: only parts of the files are displayed for readability. The socket connects normally on my local machine when spawning both files in separate terminals.
You need to put the Docker containers on a shared network and refer to them by name, not 127.0.0.1. https://docs.docker.com/compose/networking provides more documentation. You'll also need to listen on '0.0.0.0' so that you accept connections across the Docker network.
If you're trying to connect to the Docker containers from outside Docker, you'll also have to publish a port. The same reference shows you how.
http.listen(4000, '127.0.0.1', () => {
should become
http.listen(4000, '0.0.0.0', () => {
so that the server listens on all addresses, including the one Docker allocates on the Docker network.
Then the client has to refer to the server by the name given in docker compose, so
const url = `ws://127.0.0.1:4000`
becomes
const url = `ws://dashboard-server:4000`
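Putting both changes together, a minimal compose sketch (service names from the question; compose attaches services defined in one file to a shared default network, so no explicit networks block is needed):

services:
  dashboard-server:
    image: enginetonic:compose1.2
    command: node src/service/endpoint/dashboard/dashboard-server/dashboard-server.js
    ports:
      - "4000:4000"        # published only so the host can reach it
  integration-test:
    image: enginetonic:compose1.2
    depends_on:
      - dashboard-server   # inside the network, reachable as ws://dashboard-server:4000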
