I have made a simple Flask app to practice Pulumi.
It reads an environment variable set through the Dockerfile, and I intend to host it on AWS Fargate with RDS Postgres as the database.
Here is the Flask app:
import os

from flask import Flask, request
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://{}".format(
    os.environ.get("DATABASE_URL")
)
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)
migrate = Migrate(app, db)


class CarsModel(db.Model):
    __tablename__ = "cars"

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String())
    model = db.Column(db.String())
    doors = db.Column(db.Integer())

    def __init__(self, name, model, doors):
        self.name = name
        self.model = model
        self.doors = doors

    def __repr__(self):
        return f"<Car {self.name}>"


@app.route("/")
def hello():
    return {"hello": "world"}


@app.route("/cars", methods=["POST", "GET"])
def handle_cars():
    if request.method == "POST":
        if request.is_json:
            data = request.get_json()
            new_car = CarsModel(
                name=data["name"], model=data["model"], doors=data["doors"]
            )
            db.session.add(new_car)
            db.session.commit()
            return {"message": f"car {new_car.name} has been created successfully."}
        else:
            return {"error": "The request payload is not in JSON format"}
    elif request.method == "GET":
        cars = CarsModel.query.all()
        results = [
            {"name": car.name, "model": car.model, "doors": car.doors} for car in cars
        ]
        return {"count": len(results), "cars": results, "message": "success"}


@app.route("/cars/<car_id>", methods=["GET", "PUT", "DELETE"])
def handle_car(car_id):
    car = CarsModel.query.get_or_404(car_id)

    if request.method == "GET":
        response = {"name": car.name, "model": car.model, "doors": car.doors}
        return {"message": "success", "car": response}

    elif request.method == "PUT":
        data = request.get_json()
        car.name = data["name"]
        car.model = data["model"]
        car.doors = data["doors"]
        db.session.add(car)
        db.session.commit()
        return {"message": f"car {car.name} successfully updated"}

    elif request.method == "DELETE":
        db.session.delete(car)
        db.session.commit()
        return {"message": f"Car {car.name} successfully deleted."}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
Here is the Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3.8
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
ENV FLASK_APP main.py
ENV DATABASE_URL localhost
RUN flask db init
RUN flask db migrate
RUN flask db upgrade
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]
Here is the index.ts file for Pulumi:
import * as awsx from "@pulumi/awsx";
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const vpc = new awsx.ec2.Vpc("custom");

// Step 1: Create an ECS Fargate cluster.
const cluster = new awsx.ecs.Cluster("first_cluster", { vpc });
const securityGroupIds = cluster.securityGroups.map(g => g.id);

const dbSubnets = new aws.rds.SubnetGroup("dbsubnets", {
    subnetIds: vpc.publicSubnetIds,
});

const db = new aws.rds.Instance("postgresdb", {
    engine: "postgres",
    instanceClass: "db.t2.micro",
    allocatedStorage: 20,
    dbSubnetGroupName: dbSubnets.id,
    vpcSecurityGroupIds: securityGroupIds,
    name: "dummy",
    username: "dummy",
    password: "123456789",
    publiclyAccessible: true,
    skipFinalSnapshot: true,
});

const hosts = pulumi.all([db.endpoint.apply(e => e)]);
const environment = hosts.apply(([postgresHost]) => [
    { name: "DATABASE_URL", value: postgresHost },
]);

// Step 2: Define the Networking for our service.
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "net-lb", { external: true, securityGroups: cluster.securityGroups, vpc });
const atg = alb.createTargetGroup(
    "app-tg", { port: 8000, deregistrationDelay: 0 });
const web = atg.createListener("web", { port: 80, external: true });

// Step 3: Build and publish a Docker image to a private ECR registry.
const img = awsx.ecs.Image.fromPath("app-img", "./app");

// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: img,
            cpu: 102 /*10% of 1024*/,
            memory: 50 /*MB*/,
            portMappings: [web],
            environment: environment,
        },
    },
    desiredCount: 5,
}, { dependsOn: [db] });

// Step 5: Export the Internet address for the service.
export const url = web.endpoint.hostname;
Now, when I do pulumi up, I get this:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
(Background on this error at: http://sqlalche.me/e/e3q8)
at /Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker.ts:546:15
at Generator.next (<anonymous>)
at fulfilled (/Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker/docker.js:18:58)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
error: The command '/bin/sh -c flask db migrate' returned a non-zero code: 1
Now, I know it's because it's trying to connect to localhost, since that's the default, but how do I pass in the hostname of the db resource?
Thanks
UPDATE 1: I tried removing ENV DATABASE_URL localhost.
After removing ENV DATABASE_URL localhost, I get:
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 490, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "None" to address: Name or service not known
(Background on this error at: http://sqlalche.me/e/e3q8)
at /Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker.ts:546:15
at Generator.next (<anonymous>)
at fulfilled (/Users/myuser/projects/practice/pulumi/simple_flask_app/node_modules/@pulumi/docker/docker.js:18:58)
I'd consider it bad practice to run the migrations during the Docker build. What happens if the build fails afterwards? How do you control which changes are applied to which environment? I think there are better solutions to this problem.
Those migrations could also be applied when the container boots up in Fargate, e.g. by putting those commands into an entrypoint script or by executing the migration during process startup (basically in your main.py), as described here: https://flask-migrate.readthedocs.io/en/latest/#command-reference
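For illustration, here is a minimal sketch of the second option. It assumes the migrations directory was generated once with flask db init / flask db migrate, committed with the code and copied into the image, that the RUN flask db ... lines are removed from the Dockerfile, and that DATABASE_URL is injected by Fargate at run time; upgrade() is Flask-Migrate's programmatic equivalent of flask db upgrade:

# main.py (sketch) -- apply pending migrations at container start-up
# instead of at image build time.
import os

from flask import Flask
from flask_migrate import Migrate, upgrade
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://{}".format(
    os.environ.get("DATABASE_URL")
)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

# ... models and routes as in the question ...

if __name__ == "__main__":
    with app.app_context():
        # Programmatic equivalent of `flask db upgrade`: applies any pending
        # migrations from the baked-in migrations/ directory before serving.
        upgrade()
    app.run(host="0.0.0.0", port=8000)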
Another reason for not doing it during pulumi up is that it would also require a firewall rule allowing your local machine to access the database (which might already be “solved” by your publiclyAccessible setting, though).
If you still want to keep this action in the build, you need a different way of providing the database URL to step 3. The environment block is only used during step 4 (setting up Fargate). For step 3 you could leverage build args (https://docs.docker.com/engine/reference/builder/#arg) and pass them via Pulumi as documented here: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
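A rough sketch of that approach, assuming the Dockerfile swaps ENV DATABASE_URL localhost for ARG DATABASE_URL followed by ENV DATABASE_URL=$DATABASE_URL, and assuming the classic awsx Image.fromDockerBuild helper is available in the version you're using (step 3 would then become):

// Step 3 (sketch): build the image with the RDS endpoint passed as a build arg.
// Requires `ARG DATABASE_URL` (and `ENV DATABASE_URL=$DATABASE_URL`) in the Dockerfile.
const img = awsx.ecs.Image.fromDockerBuild("app-img", {
    context: "./app",
    args: {
        // db.endpoint is an Output<string>, so the RDS instance exists and its
        // address is known before `docker build` runs.
        DATABASE_URL: db.endpoint,
    },
});

Note that db.endpoint resolves to host:port only, so the SQLALCHEMY_DATABASE_URI format string would still need credentials and a database name added to form a complete connection URL.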
Keep in mind that this adds some security issues because you open up the database to the public which wouldn’t be necessary otherwise. So I’d definitely go with a different approach as described above.
Related
I've been attempting to create a managed instance group on GCP consisting of instances that host a custom Docker image. However, I'm struggling to figure out how to do this with Pulumi.
Reading Google's GCP documentation, it's possible to deploy instances that host a Docker container within a managed instance group via instance templates.
Practically with gcloud this looks like:
gcloud compute instance-templates create-with-container TEMPLATE_NAME --container-image DOCKER_IMAGE
Reading Pulumi's instance template documentation, however, it's not clear how to create an instance template that would do the same thing as the command above.
Is it possible in Pulumi to create a managed instance group where the instances host a custom docker image, or will I have to do something like create an instance template manually, and refer to that within my Pulumi script?
Here's a hybrid approach that utilises both gcloud and Pulumi.
At a high level:
Create a docker container and upload to the Google Container Registry
Create an instance template using gcloud
Create a managed instance group, referencing the instance template from within the Pulumi script
#1 Creating the Docker Container
Use CloudBuild to detect changes within a Git repo, create a docker container, and upload it to the Google Container Registry.
Within my repo I have a Dockerfile with instructions on how to build the container that will be used for my instance. I use Supervisord to start and monitor my application.
Here's how it looks:
# my-app-repo/Dockerfile
FROM ubuntu:22.04
RUN apt update
RUN apt -y install software-properties-common
RUN apt install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
RUN chmod 0700 /etc/supervisord.conf
COPY ./my-app /home/my-app
RUN chmod u+x /home/my-app
# HTTPS
EXPOSE 443/tcp
# supervisord support
EXPOSE 9001/tcp
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
The second part of this is to build the docker container and upload to the Google Container Registry. I do this via CloudBuild. Here's the corresponding Pulumi code (building a Golang app):
Note: make sure you've connected the repo via the CloudBuild section of the GCP website first
const myImageName = pulumi.interpolate`gcr.io/${project}/my-image-name`

const buildTrigger = new gcp.cloudbuild.Trigger("my-app-build-trigger", {
    name: "my-app",
    description: "Builds My App image",
    build: {
        steps: [
            {
                name: "golang",
                id: "build-server",
                entrypoint: "bash",
                timeout: "300s",
                args: ["-c", "go build"],
            },
            {
                name: "gcr.io/cloud-builders/docker",
                id: "build-docker-image",
                args: [
                    "build",
                    "-t", pulumi.interpolate`${myImageName}:$BRANCH_NAME-$REVISION_ID`,
                    "-t", pulumi.interpolate`${myImageName}:latest`,
                    '.',
                ],
            },
        ],
        images: [myImageName],
    },
    github: {
        name: "my-app-repo",
        owner: "MyGithubUsername",
        push: {
            branch: "^main$",
        },
    },
});
#2 Creating an Instance Template
As I haven't been able to figure out how to easily create an instance template via Pulumi, I decided to use the Google SDK via the gcloud command-line tool.
gcloud compute instance-templates create-with-container my-template-name-01 \
--region us-central1 \
--container-image=gcr.io/my-project/my-image-name:main-e286d94217719c3be79aac1cbd39c0a629b84de3 \
--machine-type=e2-micro \
--network=my-network-name-59c9c08 \
--tags=my-tag-name \
--service-account=my-service-account@my-project.iam.gserviceaccount.com
I got the values above (container image, network name, etc.) simply by browsing my project on the GCP website.
#3 Creating the Managed Instance Group
Having created an instance template, you can now reference it within your Pulumi script:
const myHealthCheck = new gcp.compute.HealthCheck("my-app-health-check", {
    checkIntervalSec: 5,
    timeoutSec: 5,
    healthyThreshold: 2,
    unhealthyThreshold: 5,
    httpHealthCheck: {
        requestPath: "/health-check",
        port: 80,
    },
});

const instanceGroupManager = new gcp.compute.InstanceGroupManager("my-app-instance-group", {
    baseInstanceName: "my-app-name-prefix",
    zone: hostZone,
    targetSize: 2,
    versions: [
        {
            name: "my-app",
            instanceTemplate: "https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-template-name-01",
        },
    ],
    autoHealingPolicies: {
        healthCheck: myHealthCheck.id,
        initialDelaySec: 300,
    },
});
For completeness, I've also included another part of my Pulumi script, which creates a backend service and connects it to the instance group created above via the InstanceGroupManager call. Note that the load balancer in this example uses TCP instead of HTTPS (my app handles SSL connections itself and thus uses a TCP network load balancer).
const backendService = new gcp.compute.RegionBackendService("my-app-backend-service", {
    region: hostRegion,
    enableCdn: false,
    protocol: "TCP",
    backends: [{
        group: instanceGroupManager.instanceGroup,
    }],
    healthChecks: defaultHttpHealthCheck.id,
    loadBalancingScheme: "EXTERNAL",
});

const myForwardingRule = new gcp.compute.ForwardingRule("my-app-forwarding-rule", {
    description: "HTTPS forwarding rule",
    region: hostRegion,
    ipAddress: myIPAddress.address,
    backendService: backendService.id,
    portRange: "443",
});
Note: Ideally step #2 would be done with Pulumi as well; however, I haven't worked that part out just yet.
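For what it's worth, here is a rough, untested sketch of what step #2 might look like natively in Pulumi. It assumes that create-with-container essentially boils down to a Container-Optimized OS boot image plus a gce-container-declaration metadata entry; the image, network, tag and service account values below are just the illustrative ones from the gcloud command above:

// Hypothetical Pulumi equivalent of `gcloud compute instance-templates create-with-container`.
const containerDeclaration = pulumi.interpolate`spec:
  containers:
    - name: my-app
      image: ${myImageName}:latest
  restartPolicy: Always`;

const instanceTemplate = new gcp.compute.InstanceTemplate("my-template-name", {
    machineType: "e2-micro",
    region: "us-central1",
    disks: [{
        // Container-Optimized OS reads the container spec from instance metadata.
        sourceImage: "projects/cos-cloud/global/images/family/cos-stable",
        boot: true,
        autoDelete: true,
    }],
    networkInterfaces: [{
        network: "my-network-name-59c9c08",
        accessConfigs: [{}], // ephemeral external IP
    }],
    metadata: {
        "gce-container-declaration": containerDeclaration,
    },
    tags: ["my-tag-name"],
    serviceAccount: {
        email: "my-service-account@my-project.iam.gserviceaccount.com",
        scopes: ["cloud-platform"],
    },
});

If that works, the versions[0].instanceTemplate in the InstanceGroupManager above could then reference instanceTemplate.selfLink instead of the hard-coded URL.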
I have created a VM instance in GCP using Pulumi and installed Docker on it. I am trying to connect to the remote Docker instance, but the connection fails because it prompts for host key verification.
const remoteInstance = new docker.Provider(
    "remote",
    {
        host: interpolate`ssh://user@${externalIP}:22`,
    },
    { dependsOn: dockerInstallation }
);
I am able to run Docker containers locally, but I want to run the same in the VM. The code snippet is shown above.
With the recent version of "@pulumi/docker": "^3.2.0" you can now pass the SSH options. Reference:
const remoteInstance = new docker.Provider(
    "remote",
    {
        host: interpolate`ssh://user@${externalIP}:22`,
        sshOpts: [
            "-o",
            "StrictHostKeyChecking=no",
            "-o",
            "UserKnownHostsFile=/dev/null",
        ],
    },
    { dependsOn: dockerInstallation }
);
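Once the provider can connect, individual resources can target the remote engine via the provider resource option. A small sketch, with nginx:latest standing in for whatever image you actually want to run:

// Sketch: run a container on the VM's Docker engine through the SSH-backed provider.
const webContainer = new docker.Container(
    "web",
    {
        image: "nginx:latest", // placeholder image
        ports: [{ internal: 80, external: 80 }],
    },
    { provider: remoteInstance }
);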
I was able to build a multiarch image successfully from an M1 MacBook, which is arm64.
Here's my Dockerfile. I'm trying to run the image on a Raspberry Pi (aarch64/arm64) and I get this error when running it: standard_init_linux.go:228: exec user process caused: exec format error
Editing the post to include the Python file as well:
FROM frolvlad/alpine-python3
RUN pip3 install docker
RUN mkdir /hoster
WORKDIR /hoster
ADD hoster.py /hoster/
CMD ["python3", "-u", "hoster.py"]
#!/usr/bin/python3
import docker
import argparse
import shutil
import signal
import time
import sys
import os

label_name = "hoster.domains"
enclosing_pattern = "#-----------Docker-Hoster-Domains----------\n"
hosts_path = "/tmp/hosts"
hosts = {}

def signal_handler(signal, frame):
    global hosts
    hosts = {}
    update_hosts_file()
    sys.exit(0)

def main():
    # register the exit signals
    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    args = parse_args()
    global hosts_path
    hosts_path = args.file
    dockerClient = docker.APIClient(base_url='unix://%s' % args.socket)
    events = dockerClient.events(decode=True)

    # get running containers
    for c in dockerClient.containers(quiet=True, all=False):
        container_id = c["Id"]
        container = get_container_data(dockerClient, container_id)
        hosts[container_id] = container

    update_hosts_file()

    # listen for events to keep the hosts file updated
    for e in events:
        if e["Type"] != "container":
            continue

        status = e["status"]
        if status == "start":
            container_id = e["id"]
            container = get_container_data(dockerClient, container_id)
            hosts[container_id] = container
            update_hosts_file()

        if status == "stop" or status == "die" or status == "destroy":
            container_id = e["id"]
            if container_id in hosts:
                hosts.pop(container_id)
                update_hosts_file()

def get_container_data(dockerClient, container_id):
    # extract all the info with the docker api
    info = dockerClient.inspect_container(container_id)
    container_hostname = info["Config"]["Hostname"]
    container_name = info["Name"].strip("/")
    container_ip = info["NetworkSettings"]["IPAddress"]
    if info["Config"]["Domainname"]:
        container_hostname = container_hostname + "." + info["Config"]["Domainname"]

    result = []
    for values in info["NetworkSettings"]["Networks"].values():
        if not values["Aliases"]:
            continue
        result.append({
            "ip": values["IPAddress"],
            "name": container_name,
            "domains": set(values["Aliases"] + [container_name, container_hostname])
        })

    if container_ip:
        result.append({"ip": container_ip, "name": container_name, "domains": [container_name, container_hostname]})

    return result

def update_hosts_file():
    if len(hosts) == 0:
        print("Removing all hosts before exit...")
    else:
        print("Updating hosts file with:")

    for id, addresses in hosts.items():
        for addr in addresses:
            print("ip: %s domains: %s" % (addr["ip"], addr["domains"]))

    # read all the lines of the original file
    lines = []
    with open(hosts_path, "r+") as hosts_file:
        lines = hosts_file.readlines()

    # remove all the lines after the known pattern
    for i, line in enumerate(lines):
        if line == enclosing_pattern:
            lines = lines[:i]
            break

    # remove all the trailing newlines on the line list
    if lines:
        while lines[-1].strip() == "":
            lines.pop()

    # append all the domain lines
    if len(hosts) > 0:
        lines.append("\n\n" + enclosing_pattern)
        for id, addresses in hosts.items():
            for addr in addresses:
                lines.append("%s %s\n" % (addr["ip"], " ".join(addr["domains"])))
        lines.append("#-----Do-not-add-hosts-after-this-line-----\n\n")

    # write it to the auxiliary file
    aux_file_path = hosts_path + ".aux"
    with open(aux_file_path, "w") as aux_hosts:
        aux_hosts.writelines(lines)

    # replace etc/hosts with aux file, making it atomic
    shutil.move(aux_file_path, hosts_path)

def parse_args():
    parser = argparse.ArgumentParser(description='Synchronize running docker container IPs with host /etc/hosts file.')
    parser.add_argument('socket', type=str, nargs="?", default="tmp/docker.sock", help='The docker socket to listen for docker events.')
    parser.add_argument('file', type=str, nargs="?", default="/tmp/hosts", help='The /etc/hosts file to sync the containers with.')
    return parser.parse_args()

if __name__ == '__main__':
    main()
A "multiarch" Python interpreter built on MacOS is intended to target MacOS-on-Intel and MacOS-on-Apple's-arm64.
There is absolutely no binary compatibility with Linux-on-Apple's-arm64, or with Linux-on-aarch64. You can't run MacOS executables on Linux, no matter if the architecture matches or not.
This happens when you build the image on a machine (host) whose operating system/platform is different from the platform on which you want to spin up the containers.
The solution is to build the Docker image on the same kind of machine/operating system that needs to run it (that needs to spin up the containers).
In my case I built Node.js, Python, Nginx, Redis and Postgres images on an OS X host and was trying to spin up containers from those images on an Ubuntu/Debian host.
I solved it by building the images on Ubuntu/Debian and spinning up the containers on the same platform (Ubuntu/Debian).
I have a very strange issue with my Rust program that uses the rocket-rs library.
The issue I am facing is that when I build my program in a Docker container using a Dockerfile I created, some parts of the config I set out in the rocket.toml file are not applied. More specifically, I have set the log level option to critical in the config file and that is working, but the address option I have set in the config file is not applied.
What is weird is that when I build and run the program on my local machine, all the options are applied properly, but not in the container.
Output when I build and run the program on my machine (no docker):
Configured for release.
>> address: 0.0.0.0
>> port: 8000
>> workers: 12
>> ident: Rocket
>> keep-alive: 5s
>> limits: bytes = 8KiB, data-form = 2MiB, file = 1MiB, form = 32KiB, json = 1MiB, msgpack = 1MiB, string = 8KiB
>> tls: disabled
>> temp dir: C:\Users\Nlanson\AppData\Local\Temp\
>> log level: critical
>> cli colors: true
>> shutdown: ctrlc = true, force = true, grace = 2s, mercy = 3s
Output when I build and run the program in a docker container:
Configured for release.
>> address: 127.0.0.1 //This is what I do not want
>> port: 8000
>> workers: 2
>> ident: Rocket
>> keep-alive: 5s
>> limits: bytes = 8KiB, data-form = 2MiB, file = 1MiB, form = 32KiB, json = 1MiB, msgpack = 1MiB, string = 8KiB
>> tls: disabled
>> temp dir: /tmp
>> log level: critical
>> cli colors: true
>> shutdown: ctrlc = true, force = true, signals = [SIGTERM], grace = 2s, mercy = 3s
Here is the Dockerfile I am using:
FROM rust as builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM rust as runtime
WORKDIR /app
COPY --from=builder /app/target/release/server .
COPY --from=builder /app/database.db .
EXPOSE 8000
CMD ["./server"]
and my rocket config file:
[global]
#address is not applied
address = "0.0.0.0"
#log level is applied
log_level = "critical"
I have tried a few things to troubleshoot this issue:
Run the container with docker run -it <container name> bash and check that all the required files, including the config file, are copied into the container
Build the program in the container through bash using different options
Please let me know if I am missing any details.
Thanks in advance
You can create an environment variable named ROCKET_ADDRESS in the Dockerfile. I am sharing an example:
ENV ROCKET_ADDRESS=0.0.0.0
EXPOSE 8000
CMD ["./server"]
I have one container that is serving HTTP on port 4000.
It has a socket server attached.
docker-compose:
dashboard-server:
  image: enginetonic:compose1.2
  container_name: dashboard-server
  command: node src/service/endpoint/dashboard/dashboard-server/dashboard-server.js
  restart: on-failure
  ports:
    - 4000:4000

integration-test:
  image: enginetonic:compose1.2
  container_name: integration-test
  testRegex "(/integration/.*|(\\.|/)(integration))\\.jsx?$$"
  tty: true
server:
const http = require('http').createServer(handler)
const io = Io(http)

io.on('connection', socket => {
  logger.debug('socket connected')
})

io.use((socket, next) => {
  logger.debug('socket connection established.')
})

http.listen(4000, '127.0.0.1', () => {
  console.log(
    `Server running at http://127.0.0.1:4000/`
  )
output in docker:
Server running at http://127.0.0.1:4000/
https is listening: true
Now, I am trying to connect to this server from another container like this:
file:
const url = `ws://dashboard-server:4000`
const ioc = IoC.connect(url)

ioc.on('error', error => {
  console.log(error.message)
})
ioc.on('connect', res => {
  console.log('connect')
})
ioc.on('connect_error', (error) => {
  console.log(error.message)
})
output:
xhr poll error
When I run both locally in a terminal, I get the correct response:
{"message":"socket connection established","level":"debug"}
Why isn't the socket making a connection inside the container, when it works locally?
What am I doing wrong?
edit: only parts of the files are displayed for readability. The socket connects normally on my local machine when spawning both files in separate terminals.
You need to link the Docker containers and refer to them by name, not 127.0.0.1. https://docs.docker.com/compose/networking provides more documentation. You'll also need to listen on '0.0.0.0' so that you accept connections across the Docker network.
I only see one container in your compose file. If you're trying to connect to the Docker containers from outside Docker, you'll have to expose a port. The same reference shows you how.
http.listen(4000, '127.0.0.1', () => {
should become
http.listen(4000, '0.0.0.0', () => {
so that the server is listening on all addresses, including the address that docker is automatically allocating on a docker network.
Then the client has to refer to the server by the name given in docker compose, so
const url = `ws://127.0.0.1:4000`
becomes
const url = `ws://dashboard-server:4000`