How to connect to a remote Docker instance using Pulumi?

I have created a VM instance in GCP using Pulumi and installed Docker on it. I am trying to connect to the Docker daemon on that remote instance, but the connection fails because SSH asks for host key verification (a pop-up window asks to verify the key).
const remoteInstance = new docker.Provider(
    "remote",
    {
        host: interpolate`ssh://user@${externalIP}:22`,
    },
    { dependsOn: dockerInstallation }
);
I can run Docker containers locally, but I want to run the same containers on the VM. The code snippet is above.

With the recent version of "@pulumi/docker" ("^3.2.0") you can now pass the ssh options:
const remoteInstance = new docker.Provider(
    "remote",
    {
        host: interpolate`ssh://user@${externalIP}:22`,
        sshOpts: [
            "-o",
            "StrictHostKeyChecking=no",
            "-o",
            "UserKnownHostsFile=/dev/null",
        ],
    },
    { dependsOn: dockerInstallation }
);
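For completeness, here's a minimal usage sketch (the resource name, nginx image, and port numbers are illustrative, not from the original; same imports as the snippet above are assumed). Passing the provider resource option makes the container run against the remote daemon instead of the local one:
const remoteContainer = new docker.Container(
    "remote-nginx",
    {
        image: "nginx:latest",
        ports: [{ internal: 80, external: 8080 }],
    },
    { provider: remoteInstance }
);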

Related

Pulumi on GCP - How to create a Managed Instance Group with Docker Container Instances

I've been attempting to create a managed instance group on GCP consisting of instances that host a custom docker image. However, I'm struggling to figure out how to do this with Pulumi.
Reading Google's GCP documentation, it appears possible to deploy instances that host a docker container within a managed instance group via instance templates.
Practically with gcloud this looks like:
gcloud compute instance-templates create-with-container TEMPLATE_NAME --container-image DOCKER_IMAGE
Reading Pulumi's instance template documentation, however, it's not clear how to create an instance template that would do the same thing as the command above.
Is it possible in Pulumi to create a managed instance group where the instances host a custom docker image, or will I have to do something like create an instance template manually, and refer to that within my Pulumi script?
Here's a hybrid approach that utilises both gcloud and Pulumi.
At a high level:
1. Create a docker container and upload it to the Google Container Registry
2. Create an instance template using gcloud
3. Create a managed instance group, referencing the instance template from within the Pulumi script
#1 Creating the Docker Container
Use CloudBuild to detect changes within a Git repo, build a docker container, and upload it to the Google Container Registry.
Within my repo I have a Dockerfile with instructions on how to build the container that will be used for my instance. I use Supervisord to start and monitor my application.
Here's how it looks:
# my-app-repo/Dockerfile
FROM ubuntu:22.04
RUN apt update
RUN apt -y install software-properties-common
RUN apt install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
RUN chmod 0700 /etc/supervisord.conf
COPY ./my-app /home/my-app
RUN chmod u+x /home/my-app
# HTTPS
EXPOSE 443/tcp
# supervisord web interface
EXPOSE 9001/tcp
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
The second part is to build the docker container and upload it to the Google Container Registry. I do this via CloudBuild. Here's the corresponding Pulumi code (building a Golang app):
Note: make sure you've connected the repo via the CloudBuild section of the GCP website first
const myImageName = pulumi.interpolate`gcr.io/${project}/my-image-name`;
const buildTrigger = new gcp.cloudbuild.Trigger("my-app-build-trigger", {
    name: "my-app",
    description: "Builds My App image",
    build: {
        steps: [
            {
                name: "golang",
                id: "build-server",
                entrypoint: "bash",
                timeout: "300s",
                args: ["-c", "go build"],
            },
            {
                name: "gcr.io/cloud-builders/docker",
                id: "build-docker-image",
                args: [
                    "build",
                    "-t", pulumi.interpolate`${myImageName}:$BRANCH_NAME-$REVISION_ID`,
                    "-t", pulumi.interpolate`${myImageName}:latest`,
                    ".",
                ],
            },
        ],
        images: [myImageName],
    },
    github: {
        name: "my-app-repo",
        owner: "MyGithubUsername",
        push: {
            branch: "^main$",
        },
    },
});
#2 Creating an Instance Template
As I haven't been able to figure out how to easily create an instance template via Pulumi, I decided to use the Google SDK via the gcloud command-line tool.
gcloud compute instance-templates create-with-container my-template-name-01 \
    --region us-central1 \
    --container-image=gcr.io/my-project/my-image-name:main-e286d94217719c3be79aac1cbd39c0a629b84de3 \
    --machine-type=e2-micro \
    --network=my-network-name-59c9c08 \
    --tags=my-tag-name \
    --service-account=my-service-account@my-project.iam.gserviceaccount.com
I got the values above (container image, network name, etc.) simply by browsing my project on the GCP website.
#3 Creating the Managed Instance Group
Having created an instance template, you can now reference it within your Pulumi script:
const myHealthCheck = new gcp.compute.HealthCheck("my-app-health-check", {
    checkIntervalSec: 5,
    timeoutSec: 5,
    healthyThreshold: 2,
    unhealthyThreshold: 5,
    httpHealthCheck: {
        requestPath: "/health-check",
        port: 80,
    },
});
const instanceGroupManager = new gcp.compute.InstanceGroupManager("my-app-instance-group", {
    baseInstanceName: "my-app-name-prefix",
    zone: hostZone,
    targetSize: 2,
    versions: [
        {
            name: "my-app",
            instanceTemplate: "https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-template-name-01",
        },
    ],
    autoHealingPolicies: {
        healthCheck: myHealthCheck.id,
        initialDelaySec: 300,
    },
});
For completeness, I've also included another part of my Pulumi script which creates a backend service and connects it to the instance group created above via the InstanceGroupManager call. Note that the Load Balancer in this example is using TCP instead of HTTPS (My App is handling SSL connections and thus uses a TCP Network Load Balancer).
const backendService = new gcp.compute.RegionBackendService("my-app-backend-service", {
    region: hostRegion,
    enableCdn: false,
    protocol: "TCP",
    backends: [{
        group: instanceGroupManager.instanceGroup,
    }],
    healthChecks: defaultHttpHealthCheck.id,
    loadBalancingScheme: "EXTERNAL",
});
const myForwardingRule = new gcp.compute.ForwardingRule("my-app-forwarding-rule", {
    description: "HTTPS forwarding rule",
    region: hostRegion,
    ipAddress: myIPAddress.address,
    backendService: backendService.id,
    portRange: "443",
});
Note: Ideally step #2 would be done with Pulumi as well; however, I haven't worked that part out just yet.
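For reference, here's an untested sketch of what step #2 might look like in Pulumi (assuming the same @pulumi/gcp import as the snippets above). It mimics what create-with-container does: boot Container-Optimized OS and pass the container spec via the gce-container-declaration metadata key. The YAML shape mirrors what gcloud generates, but treat the details as assumptions:
const containerDeclaration = `spec:
  containers:
    - name: my-app
      image: gcr.io/my-project/my-image-name:latest
  restartPolicy: Always
`;
const instanceTemplate = new gcp.compute.InstanceTemplate("my-template-name-01", {
    machineType: "e2-micro",
    disks: [{
        // Container-Optimized OS reads the container declaration at boot
        sourceImage: "projects/cos-cloud/global/images/family/cos-stable",
        autoDelete: true,
        boot: true,
    }],
    networkInterfaces: [{
        network: "my-network-name-59c9c08",
        accessConfigs: [{}], // ephemeral external IP, as gcloud attaches by default
    }],
    tags: ["my-tag-name"],
    metadata: {
        "gce-container-declaration": containerDeclaration,
    },
    serviceAccount: {
        email: "my-service-account@my-project.iam.gserviceaccount.com",
        scopes: ["cloud-platform"],
    },
});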

Hashicorp Vault docker networking issue

When setting up on a brand new EC2 server as a test I run the following and it all works fine.
/vault/config/local.json
{
    "listener": [{
        "tcp": {
            "address": "0.0.0.0:8200",
            "tls_disable": 1
        }
    }],
    "storage": {
        "file": {
            "path": "/vault/data"
        }
    },
    "max_lease_ttl": "10h",
    "default_lease_ttl": "10h",
    "ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and login fine.
On one of our corporate test servers, using 0.0.0.0 I get a "web server busy, sorry" page on init. However, if I export 127.0.0.1 instead, the init works fine. I cannot reach the container from the server's command line with curl using either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours are different.
I understand that 127.0.0.1 should not work, but why am I getting "server busy" on 0.0.0.0 on one server and not another, when that is the address inside the actual container?
Thanks Mark
The listener works fine in the container with 0.0.0.0. To access the container externally, you need to set VAULT_ADDR to an address the host understands, not the container-internal listener address.
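For example, from the Docker host itself the published port is reachable via the host's loopback address, even though the listener inside the container is bound to 0.0.0.0:
export VAULT_ADDR='http://127.0.0.1:8200'
vault status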

How do I get my IP address from inside an ECS container running with the awsvpc network mode?

From a regular ECS container running with the bridge mode, or from a standard EC2 instance, I usually run
curl http://169.254.169.254/latest/meta-data/local-ipv4
to retrieve my IP.
In an ECS container running with the awsvpc network mode, I get the IP of the underlying EC2 instance which is not what I want. I want the address of the ENI attached to my container. How do I do that?
A new convenience environment variable is injected by the AWS container agent into every container in AWS ECS: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
    "DockerId": "redact",
    "Name": "redact",
    "DockerName": "ecs-redact",
    "Image": "redact",
    "ImageID": "redact",
    "Labels": { },
    "DesiredStatus": "RUNNING",
    "KnownStatus": "RUNNING",
    "Limits": { },
    "CreatedAt": "2019-04-16T22:39:57.040286277Z",
    "StartedAt": "2019-04-16T22:39:57.29386087Z",
    "Type": "NORMAL",
    "Networks": [
        {
            "NetworkMode": "awsvpc",
            "IPv4Addresses": [
                "172.30.1.115"
            ]
        }
    ]
}
Under the key Networks you'll find IPv4Addresses.
Your application code can then look something like this (Python):
import os
import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
If instead you need the container's public IP (Node.js):
import * as publicIp from 'public-ip';
const publicIpAddress = await publicIp.v4(); // your container's public IP

Config Vault Docker container with Consul Docker container

I am trying to deploy the Vault Docker image to work with the Consul Docker image as its storage backend.
I have the following JSON config file for the vault container:
{
    "listener": [{
        "tcp": {
            "address": "0.0.0.0:8200",
            "tls_disable": 1
        }
    }],
    "storage": {
        "consul": {
            "address": "127.0.0.1:8500",
            "path": "vault/"
        }
    },
    "max_lease_ttl": "10h",
    "default_lease_ttl": "10h",
    "ui": true
}
Running the consul container:
docker run -d -p 8501:8500 -it consul
and then running the vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the vault container comes up, it stops running, and when querying the logs I see the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and whether I have a configuration problem?
Since you are running Docker, the 127.0.0.1 address you are pointing at is inside your container, but Consul isn't listening there; it's listening on your Docker server's localhost!
So I would recommend that you add a link (--link consul:consul) when you start the vault container, and set "address": "consul:8500" in the config.
Or, change "address": "127.0.0.1:8500" to "address": "172.17.0.1:8500" to connect to the port 8500 forwarded on your Docker host. The IP is whatever is set on your docker0 interface. That's less robust, though, since it isn't official and can change with configuration, so I recommend linking.
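A minimal sketch of the linked setup, reusing the commands from the question (--name is added so the link target exists):
docker run -d --name consul -p 8501:8500 consul
docker run -d --link consul:consul -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
with the storage block in the Vault config pointing at the link alias:
"storage": {
    "consul": {
        "address": "consul:8500",
        "path": "vault/"
    }
}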

IntelliJ docker integration can't open ports

The docker integration has a weirdly proprietary config format, and it's very unpredictable and quite frustrating.
This is the command I want to run for my container:
docker run -p 9999:9999 mycontainer
Pretty much the simplest command. I can start my container with this command, see the ports open in Kitematic, and access it from the host.
I tried to do this in the Docker run config by clicking CLI, which generated a JSON settings file (already this is weird and convoluted).
It gave me this JSON:
{
    "AttachStdin": true,
    "Tty": true,
    "OpenStdin": true,
    "Image": "",
    "Volumes": { },
    "ExposedPorts": { },
    "HostConfig": {
        "Binds": [ ],
        "PortBindings": {
            "9999/tcp": [ {
                "HostIp": "",
                "HostPort": "9999"
            } ]
        }
    },
    "_comment": ""
}
I then execute the run config, and according to IntelliJ the port is open (looking under the Port Bindings section of the Docker tab). But it's not open: it's not accessible from the host, and Kitematic doesn't show it open either.
How do I get this working as a run config? How do I see what docker command IntelliJ is actually running? Maybe it's just using the API programmatically.
It seems the IntelliJ Docker integration requires you to explicitly declare open ports with EXPOSE in the Dockerfile.
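So the fix is a one-line addition to the Dockerfile behind mycontainer, followed by a rebuild of the image:
# declare the port so the IntelliJ run config can bind it
EXPOSE 9999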
