Pulumi - how to pull a docker image from a private registry? - docker

I've declared a Kubernetes deployment which has two containers. One is built locally, another needs to be pulled from a private registry.
const appImage = new docker.Image("ledgerImage", {
    imageName: 'us.gcr.io/qwil-build/ledger',
    build: "../../",
});
const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
    spec: {
        template: {
            metadata: {
                labels: {name: "ledger"},
                name: "ledger",
            },
            spec: {
                containers: [
                    {
                        name: "api",
                        image: appImage.imageName,
                    },
                    {
                        name: "ssl-proxy",
                        image: "us.gcr.io/qwil-build/monolith-ssl-proxy:latest",
                    }
                ],
            }
        }
    }
});
When I run pulumi up it hangs - this is happening because of a complaint that You don't have the needed permissions to perform this operation, and you may have invalid credentials. I see this complaint when I run kubectl describe <name of pod>. However, when I run docker pull us.gcr.io/qwil-build/monolith-ssl-proxy:latest it executes just fine. I've re-run gcloud auth configure-docker and it hasn't helped.
I found https://github.com/pulumi/pulumi-cloud/issues/112 but it seems that docker.Image requires a build arg which suggests to me it's meant for local images, not remote images.
How can I pull an image from a private registry?
EDIT:
Turns out I have a local dockerfile for building the SSL proxy I need. I've declared a new Image with
const sslImage = new docker.Image("sslImage", {
    imageName: 'us.gcr.io/qwil-build/ledger-ssl-proxy',
    build: {
        context: "../../",
        dockerfile: "../../Dockerfile.proxy"
    }
});
I updated the image reference in the Deployment accordingly. However, I'm still getting authentication problems.
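For reference, the ssl-proxy container entry in the Deployment now points at the output of the new image resource, roughly:
{
    name: "ssl-proxy",
    image: sslImage.imageName,
},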

I have a solution that uses only code, which I use to retrieve images from a private repository on GitLab:
config.ts
import { Config } from "@pulumi/pulumi";

//
// Gitlab specific config.
//
const gitlabConfig = new Config("gitlab");

export const gitlab = {
    registry: "registry.gitlab.com",
    user: gitlabConfig.require("user"),
    email: gitlabConfig.require("email"),
    password: gitlabConfig.requireSecret("password"),
};
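The user, email and password values come from the stack configuration; they can be set with something like the following (the gitlab: namespace matches new Config("gitlab") above):
pulumi config set gitlab:user <user>
pulumi config set gitlab:email <email>
pulumi config set --secret gitlab:password <password>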
import * as pulumi from "@pulumi/pulumi";
import * as config from "./config";
import { Base64 } from 'js-base64';
import * as kubernetes from "@pulumi/kubernetes";

[...]

const provider = new kubernetes.Provider("do-k8s", { kubeconfig });

const imagePullSecret = new kubernetes.core.v1.Secret(
    "gitlab-registry",
    {
        type: "kubernetes.io/dockerconfigjson",
        stringData: {
            ".dockerconfigjson": pulumi
                .all([config.gitlab.registry, config.gitlab.user, config.gitlab.password, config.gitlab.email])
                .apply(([server, username, password, email]) => {
                    return JSON.stringify({
                        auths: {
                            [server]: {
                                auth: Base64.encode(username + ":" + password),
                                username: username,
                                email: email,
                                password: password
                            }
                        }
                    })
                })
        }
    },
    {
        provider: provider
    }
);

// Then use the imagePullSecret in your deployment like this
const deployment = new kubernetes.apps.v1.Deployment(name, {
    spec: {
        selector: { matchLabels: labels },
        template: {
            metadata: { labels: labels },
            spec: {
                imagePullSecrets: [{ name: args.imagePullSecret.metadata.apply(m => m.name) }],
                containers: [container]
            },
        },
    },
});

Turns out running pulumi destroy --yes && pulumi up --skip-preview --yes is what I needed. I guess I was in some weird inconsistent state but this is fixed now.

D'oh! Looks like RemoteImage is the answer: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/docker/#RemoteImage
EDIT:
I tried
const sslImage = new docker.RemoteImage("sslImage", {
    name: 'us.gcr.io/qwil-build/monolith-ssl-proxy:latest',
});
And I'm still getting authentication errors, so I don't think this is the answer.

You need to give your cluster the credentials to your Docker Registry, so that it can pull the images from it.
The manual process would be:
docker login registry.gitlab.com
cat ~/.docker/config.json | base64
Then create a registry_secret.yaml with the output from above
apiVersion: v1
kind: Secret
metadata:
  name: regsec
data:
  .dockerconfigjson: ewJImF1dGhzIjogewoJCSJyZWdpc3RyeS5naXfRsYWsi7fQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkdiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzEaLjxzLxjUgKH9yIjogInN3YXJtIgp9
type: kubernetes.io/dockerconfigjson
and then apply it to your cluster with
kubectl apply -f registry_secret.yaml && kubectl get secrets
You can wrap that into Pulumi, since it supports YAML files:
new k8s.yaml.ConfigGroup("docker-secret", {files: "registry_secret.yaml"});
This only works if you have your credentials encoded in .docker/config.json and will not work if you are using a credential store.
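To illustrate the difference, ~/.docker/config.json looks roughly like this when the credentials live in the file:
{ "auths": { "registry.gitlab.com": { "auth": "<base64 of user:password>" } } }
whereas with a credential store there is no usable token in the file, only a pointer to the helper (the helper name varies; "desktop" is just an example):
{ "auths": { "registry.gitlab.com": {} }, "credsStore": "desktop" }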
The alternative would be to create the secret directly by providing your user credentials and extracting the token
kubectl create secret docker-registry regsec \
--docker-server=registry.gitlab.com --docker-username=... \
--docker-email=... --docker-password=... \
--dry-run -o yaml | grep .dockerconfigjson: | sed -e 's/.dockerconfigjson://' | sed -e 's/^[ \t]*//'
This token can now be stored as a pulumi secret with
pulumi config set docker_token --secret <your_token>
and be used like this
import {Secret} from "@pulumi/kubernetes/core/v1";
import {Config} from "@pulumi/pulumi";

/**
 * Creates a docker registry secret to pull images from private registries
 */
export class DockerRegistry {
    constructor(provider: any) {
        const config = new Config();
        const dockerToken = config.require("docker_token");
        new Secret("docker-registry-secret", {
            metadata: {
                name: "docker-registry-secret"
            },
            data: {
                ".dockerconfigjson": dockerToken
            },
            type: "kubernetes.io/dockerconfigjson"
        }, {provider});
    }
}
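A minimal usage sketch, assuming the class above lives in a local docker-registry.ts module and the deployment references the secret by the fixed name it was given:
import * as kubernetes from "@pulumi/kubernetes";
import { DockerRegistry } from "./docker-registry"; // hypothetical module holding the class above

// Path to (or contents of) a kubeconfig; adjust to however your stack obtains it.
const kubeconfig = process.env.KUBECONFIG ?? "~/.kube/config";
const provider = new kubernetes.Provider("k8s", { kubeconfig });

// Creates the docker-registry-secret in the cluster.
new DockerRegistry(provider);

const deployment = new kubernetes.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                // Matches the metadata.name set inside DockerRegistry.
                imagePullSecrets: [{ name: "docker-registry-secret" }],
                containers: [{ name: "app", image: "registry.gitlab.com/group/project:latest" }],
            },
        },
    },
}, { provider });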

Related

AWS CDK Code Pipelines- Why Can Local Obtain The Branch But Code Build Cannot?

My goal is to dynamically name resources to allow for multiple environments. For example, a "dev-accounts" table, and a "prod-accounts" table.
The issue I am facing is that Code Build cannot dynamically name resources, while running locally can. Following the example above, I am receiving "undefined-accounts" when viewing the logs in Code Build.
Code to obtain the environment by branch name:
export const getContext = (app: App): Promise<CDKContext> => {
    return new Promise(async (resolve, reject) => {
        try {
            const currentBranch = await gitBranch();
            const environment = app.node.tryGetContext("environments").find((e: any) => e.branchName === currentBranch);
            const globals = app.node.tryGetContext("globals");
            return resolve({...globals, ...environment});
        }
        catch (error) {
            return reject("Cannot get context from getContext()");
        }
    })
}
Further Explanation:
In the bin/template.ts file, I am using console.log to log the context, after calling const context = await getContext(app);
Local CLI outcome:
{
  appName: 'appName',
  region: 'eu-west-1',
  accountId: '000000000',
  environment: 'dev',
  branchName: 'dev'
}
Code Build outcome:
{
  appName: 'appName',
  region: 'eu-west-1',
  accountId: '000000000'
}
Note I've removed sensitive information.
This is my Code Pipeline built in the CDK:
this.codePipeline = new CodePipeline(this, `${environment}-${appName}-`, {
    pipelineName: `${environment}-${appName}-`,
    selfMutation: true,
    crossAccountKeys: false,
    role: this.codePipelineRole,
    synth: new ShellStep("Deployment", {
        input: CodePipelineSource.codeCommit(this.codeRepository, environment, {
            codeBuildCloneOutput: true
        }),
        installCommands: ["npm i -g npm@latest"],
        commands: [
            "cd backend",
            "npm ci",
            "npm run build",
            "cdk synth",
        ],
        primaryOutputDirectory: "backend/cdk.out",
    })
});
By using the key/value codeBuildCloneOutput: true, I believe I am completing a full clone of the CodeCommit repository, and thus retaining the git metadata.
CodeBuild exposes the CODEBUILD_SOURCE_VERSION environment variable. For CodeCommit, this is "the commit ID or branch name associated with the version of the source code to be built".
const currentBranch = process.env.CODEBUILD_SOURCE_VERSION ?? await gitBranch();
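Wired into the getContext helper from the question, a sketch (gitBranch() is the same helper used above) could look like:
export const getContext = async (app: App): Promise<CDKContext> => {
    // In CodeBuild the branch comes from the environment; locally fall back to git.
    const currentBranch = process.env.CODEBUILD_SOURCE_VERSION ?? (await gitBranch());
    const environment = app.node
        .tryGetContext("environments")
        .find((e: any) => e.branchName === currentBranch);
    const globals = app.node.tryGetContext("globals");
    return { ...globals, ...environment };
};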

How to use PM2 Process Manager with SvelteKit

I am trying to manage my SvelteKit build with PM2 (Process Manager). My problem is that I can't successfully inject a .env file using an ecosystem.config.cjs. My files currently look like this:
.env.production
PORT=3000
The only thing that changes between the two configs is the env block:
env: { }
ecosystem.config.cjs (working fine - app runs on provided port)
module.exports = {
    apps: [
        {
            name: 'my_app',
            script: './build/index.js',
            watch: false,
            ignore_watch: ['database'],
            autorestart: true,
            // --------------------------------------------------
            // if passed directly PORT is being used as expected:
            // --------------------------------------------------
            env: {
                PORT: 3000
            }
        }
    ]
};
ecosystem.config.cjs (not working - injected PORT variable is being ignored)
module.exports = {
    apps: [
        {
            name: 'my_app',
            script: './build/index.js',
            watch: false,
            ignore_watch: ['database'],
            autorestart: true,
            // ----------------------------------------------------
            // when I try to inject a .env it's just being ignored:
            // ----------------------------------------------------
            env: {
                ENV_PATH: "./.env.production",
            }
        }
    ]
};
Any help is much appreciated and thanks for reading!
Cheers,
Boris
EDIT: Made question a bit more clear + added answer below
The problem wasn't the injection of .env.production, but the PORT environment variable. PORT must be provided directly and can't be part of .env.production (well, it can be but will be ignored).
There's probably another way, but the following works:
ecosystem.config.cjs
module.exports = {
    apps: [
        {
            name: 'my_app',
            script: './build/index.js',
            watch: false,
            ignore_watch: ['database'],
            autorestart: true,
            // ----------------------------------------------------
            // PORT is passed directly; the remaining variables come
            // from the .env file referenced by ENV_PATH:
            // ----------------------------------------------------
            env: {
                PORT: 3000,
                ENV_PATH: "./.env.production",
            }
        }
    ]
};
.env.production
# production
PUBLIC_test=value
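If ENV_PATH is ever ignored in your setup, one alternative sketch (assuming dotenv is installed as a dependency) is to preload it through PM2's node_args and point it at the file via DOTENV_CONFIG_PATH:
module.exports = {
    apps: [
        {
            name: 'my_app',
            script: './build/index.js',
            // preload dotenv before the SvelteKit build starts
            node_args: '-r dotenv/config',
            env: {
                PORT: 3000,
                DOTENV_CONFIG_PATH: './.env.production'
            }
        }
    ]
};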

How Can I Reference Environment Variables in serverless.ts?

I'm using serverless-ssm-fetch in my serverless.ts file, which resolves many of the variables that are environment specific. This works great when I'm referencing these variables in my code; however, there are two values in my serverless.ts file itself that I'd like to draw from SSM Parameter Store. Below is my serverless.ts file. Pulling in lambda-security-group-ids and lambda-subnet-ids works, but I'm not sure how to reference them within the serverless.ts file itself. Does anyone know how to do this?
import type { AWS } from '@serverless/typescript';

import importFacility from '@functions/ImportFacility';
import ProcessEvent from '@functions/ProcessEvent';

const serverlessConfiguration: AWS = {
    service: 'myservice',
    frameworkVersion: '2',
    custom: {
        webpack: {
            webpackConfig: './webpack.config.js',
            includeModules: true,
        },
        bundle: {
            ignorePackages: ['pg-native']
        },
        serverlessSsmFetch: {
            DB_Host: 'database-host~true',
            PORT: 'serverless-database-port~true',
            DB_NAME: 'clinical-database-name~true',
            DB_USER_NAME: 'database-username~true',
            DB_PASSWORD: 'database-password~true',
            AWS_ACCESS_KEY: 'serverless-access-key-id~true',
            AWS_SECRECT_KEY: 'serverless-access-key-secret~true',
            LAMBDA_SECURITY_GROUP_IDS: 'lambda-security-group-ids~true', // WANT TO REFERENCE
            LAMBDA_SUBNET_IDS: 'lambda-subnet-ids~true' // WANT TO REFERENCE
        }
    },
    plugins: ['serverless-webpack', 'serverless-ssm-fetch'],
    provider: {
        name: 'aws',
        runtime: 'nodejs14.x',
        apiGateway: {
            minimumCompressionSize: 1024,
            shouldStartNameWithService: true,
        },
        environment: {
            AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1'
        },
        lambdaHashingVersion: '20201221',
        vpc: {
            securityGroupIds: [`${process.env.LAMBDA_SECURITY_GROUP_IDS}`], // NOT WORKING
            subnetIds: [`${process.env.LAMBDA_SUBNET_IDS}`] // NOT WORKING
        }
    },
    functions: { importFacility, ProcessEvent },
};

module.exports = serverlessConfiguration;
For anyone wondering, I just had the same issue. The following syntax worked for me:
vpc: {
    securityGroupIds: ['${ssm:${self:custom.serverlessSsmFetch.LAMBDA_SECURITY_GROUP_IDS}}'],
    subnetIds: ['${ssm:${self:custom.serverlessSsmFetch.LAMBDA_SUBNET_IDS}}']
}
As far as I understand, you have to use the syntax as it would have been rendered in a serverless.yml template.

How to connect go grpc server with dart grpc client using Envoy and Grpc_web

I'm new to grpc_web and envoy.
Please help me to set up the following things:
GRPC_Go server is running on ec2 instance as a docker container
Dart web client is running on local pc
Need to make grpc call request from dart web app to grpc_go server
Used envoy proxy for the request forward. Envoy proxy is running as a container in same ec2 instance
I'm getting the following error "Response: null, trailers: {access-control-allow-credentials: true, access-control-allow-origin: http://127.0.0.1:9000, vary: Origin})".
Grpc_Go:
package main

import (
    "context"
    "flag"
    "fmt"
    "log"
    "net"

    "google.golang.org/grpc"
    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

var (
    port = flag.Int("port", 50051, "The server port")
)

// server is used to implement helloworld.GreeterServer.
type server struct {
    pb.UnimplementedGreeterServer
}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    log.Printf("Received: %v", in.GetName())
    return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}

func (s *server) SayHelloAgain(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    return &pb.HelloReply{Message: "Hello again " + in.GetName()}, nil
}

func main() {
    flag.Parse()
    lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    pb.RegisterGreeterServer(s, &server{})
    log.Printf("server listening at %v", lis.Addr())
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
GRPC_dart_client:
import 'dart:html';

import 'package:grpc/grpc_web.dart';
import 'package:grpc_web/app.dart';
import 'package:grpc_web/src/generated/echo.pbgrpc.dart';

void main() {
  final channel = GrpcWebClientChannel.xhr(Uri.parse('http://ec2-ip:8080'));
  final service = EchoServiceClient(channel);
  final app = EchoApp(service);

  final button = querySelector('#send') as ButtonElement;
  button.onClick.listen((e) async {
    final msg = querySelector('#msg') as TextInputElement;
    final value = msg.value!.trim();
    msg.value = '';
    if (value.isEmpty) return;
    if (value.indexOf(' ') > 0) {
      final countStr = value.substring(0, value.indexOf(' '));
      final count = int.tryParse(countStr);
      if (count != null) {
        app.repeatEcho(value.substring(value.indexOf(' ') + 1), count);
      } else {
        app.echo(value);
      }
    } else {
      app.echo(value);
    }
  });
}
envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: echo_service
                  timeout: 0s
                  max_stream_duration:
                    grpc_timeout_header_max: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: echo_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: app
                port_value: 50051
Grpc_go_docker_file:
# Build stage (assuming a Go alpine base image, since apk is used below)
FROM golang:alpine AS builder
# Install git.
# Git is required for fetching the dependencies.
RUN apk update && apk add --no-cache git
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Start a new stage from scratch
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage. Observe we also copied the .env file
COPY --from=builder /app/main .
# Expose port 50051 to the outside world
EXPOSE 50051
CMD ["./main"]
Envoy_Docker:
COPY envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
I've been stuck on this for more than two days, please help me. Thanks in advance.
Thank you all for your replies.
I fixed this issue by using the IP of the EC2 instance.
clusters:
- name: echo_service
  connect_timeout: 0.25s
  type: logical_dns
  http2_protocol_options: {}
  lb_policy: round_robin
  load_assignment:
    cluster_name: cluster_0
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: app
              port_value: 50051
Instead of the container name ('address: app', where app is the container name) in envoy.yaml, I used the IP of the EC2 instance together with the container port, and now Envoy forwards the request to the server.
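In other words, the endpoint ends up looking roughly like this (with the instance IP as a placeholder):
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: <ec2-instance-ip>
              port_value: 50051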

How to import EKS secrets from AWS Secrets Manager using aws-cdk?

I have:
EKS deployed by aws-cdk script, with kubectl enabled, and apps deployed by eks.Cluster.addResource()
AWS Secrets Manager with a set of secrets I want to be available for EKS application
I tried to deploy Secret this way:
import * as sm from "@aws-cdk/aws-secretsmanager";

getSecret(secretKey: string): string {
    let secretTokens = sm.Secret.fromSecretArn(scope, "ImportedSecrets", awsSecretStorageArn);
    return secretTokens.secretValueFromJson(secretKey).toString();
}
createKubernetesImagePullSecrets(k8s: eks.Cluster): void {
    let eksSecretStorageName = this.env.awsResourcesConfig.k8sImagePullSecretStorageName;
    k8s.addResource(eksSecretStorageName, {
        apiVersion: "v1",
        kind: "Secret",
        metadata: {
            name: eksSecretStorageName,
        },
        data: {
            ".dockerconfigjson": this.getSecret('hub-secret'),
        },
        type: "kubernetes.io/dockerconfigjson",
    });
}
I'm getting an error from CloudFormation:
Secret in version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 0
This happens because the secret token is not expanded and the ".dockerconfigjson" field value, in this case, looks like ${Token[TOKEN.417]}
Is there a way to deploy the EKS Secret resource and expand secret tokens correctly during deployment?
I created a temporary workaround for this by downloading a plain-text version of the secrets with the aws-cli. Not a safe way, but it works. Do not use this if you have a more secure solution.
import { execSync } from "child_process";

extractSecretValues(awsSecretStorageArn: string): Map<string, string> {
    let map = new Map<string, string>();
    let secretsContent = execSync(`aws secretsmanager get-secret-value --secret-id ${awsSecretStorageArn}`).toString();
    let secrets = JSON.parse(secretsContent);
    if (!secrets)
        throw new Error(`Secret values could not be extracted from ${awsSecretStorageArn}`);
    if (secrets.SecretString) {
        let secretValuesObj = JSON.parse(secrets.SecretString);
        for (let [secretKey, secretValue] of Object.entries<string>(secretValuesObj)) {
            map.set(secretKey, secretValue);
        }
    }
    return map;
}

let secretValueMap = extractSecretValues(awsSecretStorageArn);
createKubernetesImagePullSecrets(k8s: eks.Cluster): void {
    let eksSecretStorageName = this.env.awsResourcesConfig.k8sImagePullSecretStorageName;
    k8s.addResource(eksSecretStorageName, {
        apiVersion: "v1",
        kind: "Secret",
        metadata: {
            name: eksSecretStorageName,
        },
        data: {
            ".dockerconfigjson": secretValueMap.get('hub-secret'),
        },
        type: "kubernetes.io/dockerconfigjson",
    });
}
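One detail worth noting about the manifest above: the Kubernetes data field expects base64-encoded values, so this only works if the value stored in Secrets Manager is already the base64-encoded .dockerconfigjson. If the stored value is the plain JSON document instead, using stringData in the same place lets Kubernetes do the encoding:
        stringData: {
            ".dockerconfigjson": secretValueMap.get('hub-secret'),
        },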
