AWS CDK Create API Gateway using OpenAPI Spec with Private Endpoint - aws-cdk

I am trying to work out the correct way to attach a VPC Endpoint to an API gateway using an OpenAPI Spec.
Here is what I have so far:
CDK:
var apiDefinition = ApiDefinition.FromAsset("./openapi_replaced.yaml");
var api = new SpecRestApi(this, "api", new SpecRestApiProps {
    ApiDefinition = apiDefinition,
});
OpenAPI Spec:
servers:
  - url: ""
    x-amazon-apigateway-endpoint-configuration:
      vpcEndpointIds:
        - vpce-1111aaaa00001111
Will this attach the VPC Endpoint to my API Gateway as a Private Endpoint?

Related

Terraform docker_registry_image error: 'unable to get digest: Got bad response from registry: 400 Bad Request'

I am trying to use CDK for Terraform (CDKTF) to build and push a Docker image to AWS ECR. I have decided to use the Terraform Docker provider for it. Here is my code:
class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
      defaultTags: {
        tags: {
          Project: "CV",
          Name: "CV",
        },
      },
    });

    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
      forceDestroy: true,
    });

    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      {
        provider: usProvider,
      }
    );

    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repo.repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });

    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repo.repositoryUri,
      buildAttribute: {
        context: __dirname,
      },
    });
  }
}
But during deployment I get this error: Unable to create image, image not found: unable to get digest: Got bad response from registry: 400 Bad Request. Yet it is still able to push to the registry; I can see the image in the AWS console.
I can't seem to find any mistake in my code, and I don't understand the error. I hope you can help.
The Terraform execution model is built so that Terraform first gathers all the information it needs about the current state of your infrastructure, and then in a second step calculates the plan of changes that need to be applied to bring the current state to the one you described through your configuration.
This poses a problem here: the provider you declare uses information that is only available once the plan is being put into action; there is no repo URL or auth token before the ECR repo has been created.
There are different ways to solve this problem. You can make use of the cross-stack references / multi-stack feature and split the ECR repo creation into a separate TerraformStack that is deployed beforehand; you can then pass a value from that stack into your other stack and use it to configure the provider (see the sketch after the next paragraph).
Another way to solve this is to build and push your image outside of the Terraform Docker provider, through the null provider with a local-exec provisioner, as is done in the docker-aws-ecs E2E example.
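A minimal sketch of the multi-stack variant (untested; stack and construct names are illustrative): the repo gets its own stack, and the value passed between the stacks becomes a cross-stack reference, so the repo exists before the Docker provider in the second stack needs its URI.

class RepoStack extends TerraformStack {
  public readonly repositoryUri: string;

  constructor(scope: Construct, name: string) {
    super(scope, name);
    new aws.AwsProvider(this, "us-provider", { region: "us-east-1" });
    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      repositoryName: "cv",
      forceDestroy: true,
    });
    this.repositoryUri = repo.repositoryUri;
  }
}

class ImageStack extends TerraformStack {
  constructor(scope: Construct, name: string, repositoryUri: string) {
    super(scope, name);
    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
    });
    // The auth token can be resolved at this stack's plan time, because
    // the repo was already created when the first stack was deployed.
    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      { provider: usProvider }
    );
    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });
    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repositoryUri,
      buildAttribute: { context: __dirname },
    });
  }
}

const app = new App();
const repoStack = new RepoStack(app, "repo");
new ImageStack(app, "image", repoStack.repositoryUri);
app.synth();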

Creating / Getting a Cloud Run Job using the Python API Client Library

I created a Cloud Run Job using command line:
gcloud --verbosity=debug beta run jobs create my-job \
--image=us-docker.pkg.dev/cloudrun/container/job:latest
When I list the jobs using the API Client Library, my-job is returned:
import googleapiclient.discovery

with googleapiclient.discovery.build('run', 'v1') as client:
    request = client.namespaces().jobs().list(parent=f'namespaces/my-project')
    response = request.execute()
    print(response)
However, when I try to get the job using the following snippet, I get 404 "Requested entity was not found":
...
request = client.namespaces().jobs().get(name='namespaces/my-project/jobs/my-job')
response = request.execute()
...
I am also unable to create a job using the following snippet; this again returns 404 "Requested entity was not found":
request = client.namespaces().jobs().create(
    parent=f'namespaces/my-project',
    body={
        "metadata": {
            "name": "my-job2",
        },
        "spec": {
            "template": {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [{
                                "image": "us-docker.pkg.dev/cloudrun/container/job:latest"
                            }],
                        }
                    }
                }
            }
        },
    },
)
I have Cloud Run Admin permissions for the project.
What am I missing?
Looking into the API reference, it appears you are using the calls correctly [1], [2]. The catch is the endpoint: it may be that you are using the global endpoint for the list, create, and get calls, but while list works against it, the get and create calls have to go to the correct regional endpoint.
The global endpoint's documentation for v1 states: "For v1, this endpoint only supports Global List: use regional endpoints instead."
You can see the difference using these commands from Cloud Shell (this assumes your region is us-central1; if not, it needs to be updated). The first goes to the regional endpoint, the second to the global one:
curl -X GET \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  https://us-central1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/my-job

curl -X GET \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  https://run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/my-job
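The same fix in the Python client, as a minimal sketch (untested; assumes the job lives in us-central1): point the discovery client at the regional endpoint via client_options before making the get or create call.

import googleapiclient.discovery

# Build the client against the regional endpoint instead of the global one.
with googleapiclient.discovery.build(
        'run', 'v1',
        client_options={'api_endpoint': 'https://us-central1-run.googleapis.com'},
) as client:
    request = client.namespaces().jobs().get(
        name='namespaces/my-project/jobs/my-job')
    print(request.execute())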

What url should I use to make a request to the container from the browser during development?

I have two services, a client and a server. I am using Next.js and React on the client and Express for my server, and I have a docker-compose file. I need to implement some endpoints on the backend and make requests from the client to the backend using axios.
During development I am running docker-compose up. While working on the app I created an address form in the client and wanted to see the results in the browser. When I try to submit the form and send the request to the server, I get a 404. This is the code in the client that makes a request to the backend:
import axios from 'axios'

const postNewAddress = async (address) => {
  axios.post('/address', address)
    .then(function (response) {
      console.log(response);
    })
    .catch(function (error) {
      console.log(error);
    });
}

module.exports = {
  postNewAddress
}
And this is what I currently have on the backend:
const express = require('express');
const app = express();
const port = process.env.PORT || 3001

app.use(express.json())

app.get('/', (req, res) => {
  res.send({ greeting: 'Hello world!' });
});

app.post('/address', (req, res) => {
  const address = req.body
  console.log(address)
  res.json(address)
})

app.listen(port, err => {
  if (err) throw err;
  console.log(`Listening on PORT ${port}!`)
})
When I change the URL to http://server:3001/address in the axios request, I get a net::ERR_NAME_NOT_RESOLVED error. I did some research, and that probably happens because the browser and the docker containers are running in different networks. But I couldn't find any solution that would allow me to make requests to the container from the browser.
Here is the gist for docker-compose.yml: Docker compose file
Let's say the address variable with which the POST request was made has one property:
address = { 'id': 123 };
Now to fetch that in your backend code, you need to do something like this:
app.post('/address', (req, res) => {
  const id = req.body.id
  console.log(id)
  res.json(id)
})
Browser applications can never use Docker-container hostnames. Even if the application is served from inside Docker, it ultimately runs in the browser, outside the Docker network.
If this is a development system, where the backend container and your development environment are on the same host, you can generally connect to localhost and the published ports: of your container. If your docker-compose.yml declares ports: ["3001:3001"] for the backend, then you can connect to http://localhost:3001/address.
You can also set this address in the Webpack dev server proxy configuration, so the relative /address URL you have in your code continues to work (a Next.js equivalent is sketched below).
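Since the client here is Next.js rather than a bare Webpack dev server, the equivalent would be a rewrite rule; a sketch, assuming the compose service is named server. Rewrites are resolved by the Next.js server process, which does run inside the compose network, so the container hostname works at that layer while the browser keeps requesting the relative URL.

// next.config.js -- sketch; "server" is the assumed compose service name
module.exports = {
  async rewrites() {
    return [
      {
        source: '/address',
        destination: 'http://server:3001/address', // resolved server-side
      },
    ];
  },
};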

Generate swagger 2.0 yaml using swagger 4.x package

I am going to integrate my API server with Google Cloud Endpoints, and Google Cloud Endpoints supports only Swagger 2.0 as of now.
But my dependencies/libraries are on newer versions, so I want to generate a Swagger 2.0 YAML file without downgrading the swagger library (the API endpoints are already described with @nestjs/swagger 4.x, i.e. the OpenAPI 3.0 spec).
NestJS and swagger dependencies (package.json):
...
"@nestjs/common": "^7.0.0",
"@nestjs/config": "^0.4.0",
"@nestjs/core": "^7.0.0",
"@nestjs/platform-express": "^7.0.0",
"js-yaml": "^3.14.0",
...
"@nestjs/swagger": "^4.5.4",
"swagger-ui-express": "^4.1.4",
...
And swagger generator script:
import { NestFactory } from '@nestjs/core';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { AppModule } from './app.module';
import * as fs from 'fs'
import * as yaml from 'js-yaml'

const generateSwaggerYaml = async () => {
  const app = await NestFactory.create(AppModule);
  const options = new DocumentBuilder()
    .setTitle('API Title')
    .setDescription('API Description')
    .build()
  const document = SwaggerModule.createDocument(app, options)
  fs.writeFileSync("./openapi-run.yaml", yaml.safeDump(document))
}

generateSwaggerYaml()
And the output of the script is an OpenAPI 3.0 spec :(
openapi: 3.0.0
info:
  title: API Title
  description: API Description.
  version: 1.0.0
  contact: {}
tags: []
servers: []
...
Is there any option/way to generate Swagger 2.0 YAML from an OpenAPI 3.0 document?
How can I downgrade an OpenAPI 3.0 spec to Swagger 2.0?
I use this project from github for that very purpose: https://github.com/LucyBot-Inc/api-spec-converter
For OpenAPI 3 YAML to Swagger 2 YAML, it's as simple as:
$ api-spec-converter --from openapi_3 --syntax yaml --to swagger_2 ${f} > ${SWAGGER_V2_FILE}
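If you would rather do the conversion in-process right after the generator script above, the package also exposes a Node API; a sketch, assuming its documented convert() entry point, with js-yaml (already a dependency here) writing the output:

// Sketch: convert the generated OpenAPI 3 file to Swagger 2 in-process.
const converter = require('api-spec-converter');
const yaml = require('js-yaml');
const fs = require('fs');

converter.convert(
  { from: 'openapi_3', to: 'swagger_2', source: './openapi-run.yaml' },
  (err, converted) => {
    if (err) throw err;
    // converted.spec is assumed to hold the plain Swagger 2 object
    fs.writeFileSync('./swagger-v2.yaml', yaml.safeDump(converted.spec));
  }
);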

Conditional resource in serverless

I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack and, if an env var is defined, also subscribe the queue to the topic denoted by that var; if the var is not defined, skip that part entirely and deploy just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff, it works fine.
The Type has to be String, not string. See the supported parameter data types section in the docs. A corrected sketch follows below.
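Beyond the type fix, note that CloudFormation resolves a resource's Condition key against a Conditions section, which the template above never declares; a sketch of how the resources block could look (the HasSourceTopic name is illustrative, and the subscription uses the queue ARN and protocol that an SQS subscription requires):

resources:
  Parameters:
    SourceTopicArn:
      Type: String
      Default: ""
  Conditions:
    HasSourceTopic:
      Fn::Not:
        - Fn::Equals:
            - Ref: SourceTopicArn
            - ""
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: HasSourceTopic
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Protocol: sqs
        Endpoint:
          Fn::GetAtt:
            - Queue
            - Arn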
