I have a scenario where my stack creates a DynamoDB table and other resources.
The resource names are controlled by an input parameter, except for the DynamoDB table, which must stay the same.
If I run cdk deploy --context name=a, it succeeds.
If I run cdk deploy again with cdk deploy --context name=b, it fails because the DynamoDB table has already been created.
Is there a way to manage the same DynamoDB table in my use case? Or what is the best way to improve this?
Thanks.
If I understand correctly, you want to create the DynamoDB table once, but the rest of the stack multiple times. I would move the DynamoDB table into a separate stack and add that stack as a dependency of the current one. That way, you can deploy the current stack with as many parameter values as you want without disturbing the DynamoDB table.
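A minimal sketch of that layout with CDK v2 in TypeScript; the stack names, the partition key, and the "name" context handling are illustrative, not taken from your project:

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { AttributeType, Table } from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';

// Deployed once; never parameterized by the "name" context value.
class TableStack extends Stack {
  public readonly table: Table;
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    this.table = new Table(this, 'SharedTable', {
      partitionKey: { name: 'pk', type: AttributeType.STRING },
    });
  }
}

// Deployed once per --context name=... value; holds everything else.
class AppStack extends Stack {
  constructor(scope: Construct, id: string, table: Table, props?: StackProps) {
    super(scope, id, props);
    // ...create the name-specific resources here, referencing `table`...
  }
}

const app = new App();
const name = app.node.tryGetContext('name');
const tableStack = new TableStack(app, 'TableStack');
const appStack = new AppStack(app, `AppStack-${name}`, tableStack.table);
// Passing the table across stacks already creates a cross-stack reference;
// the explicit dependency just makes the ordering obvious.
appStack.addDependency(tableStack);
```

With this split, cdk deploy --context name=b AppStack-b only touches the name-specific stack, while TableStack (and the table) stays as it is.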
I have a project with a front-end (FE), a back-end (BE), and a database (DB).
All the tutorials I have found put the three in one file.
Should the DB be in one docker-compose file and the BE and FE in another?
Or should there be one file per project, with the DB, FE, and BE together?
[UPDATE]
The stack I'm using is Spring Boot, Postgres and Angular.
Logically, your application has two parts. The front-end runs in the browser, and it makes HTTP requests to the back-end. The database is an implementation detail of the back-end and not something you need to manage separately.
So I'd consider two possible Compose layouts:

1. This is "one application", and there is one docker-compose.yml that manages all of it.
2. The front- and back-end are managed separately, since they are two separate components with a network API. You have a frontend/docker-compose.yml that manages the front-end and a backend/docker-compose.yml that manages the back-end and its associated database.
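For the second layout, and given the Spring Boot + Postgres stack from your update, the back-end file might look roughly like this (service names, ports, and credentials here are placeholders, not anything from your project):

```yaml
# backend/docker-compose.yml -- back-end plus its private database
version: "3.8"
services:
  backend:
    build: .                      # the Spring Boot application image
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/app
      SPRING_DATASOURCE_USERNAME: app
      SPRING_DATASOURCE_PASSWORD: secret
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep data across container restarts
volumes:
  pgdata:
```

Note that the database is not published to the host at all; only the back-end can reach it, over the Compose network.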
Typical container style is not to have a single shared database. Since it's easy to launch an isolated database in a container, you'd generally have a separate database per application (or per microservice if you're using that style). Of the options you suggest, a separate Compose file only launching a standalone database is the one I'd least consider.
You haven't described your particular tech stack here, but another consideration is that you may not need a container for the front-end. If it's a plain React application, the "classic" development flow of compiling it to static files and publishing them via any convenient HTTP service still works with a Docker-based backend. You get some advantages from this path like Webpack's hashed build files, so you can avoid disrupting running clients when you push out a new build. If you separate the front- and back-ends initially, then you can change the way the front-end is deployed without affecting the back-end at all.
How to deploy a specific AWS resource using serverless framework?
I know it supports deploying specific Lambda functions using sls deploy -f <function>. I'm wondering if there is a similar option to target other AWS resources.
In my use case, I have an API, ~50 Lambdas, Dynamodb, SQS, Cognito user pools etc. Each time I make a change to Cognito (or anything other than lambda code), I have to run complete sls deploy which takes ~10-15 minutes. Wondering if there is a way to skip complete deploy.
There is no way to skip a full deployment, as Serverless Framework uses CloudFormation under the hood and has to update the whole stack. The only resources that you can update separately are functions, as you mentioned, but that is only intended for development, and it does not recognize all properties during an update.
I have a serverless project with 3 "layers" - api, services and db. Each layer is just a set of functions deployed individually (I have set package.individually to true in serverless.yml). All layers are able to communicate using the invocation mechanism, from the top (api) to the bottom (db). Only the api layer has an API Gateway URL; the functions in the other layers do not need to be exposed by an API URL.
Now the project has grown and we have more developers. I want to prevent issues where somebody uses const accountDb = require('../db/account') in, say, the api modules (api must call the db layer only through an invocation wrapper).
I'd like to split the single serverless project into 3 different projects, but I am stuck on running them locally. I can run them locally on different ports but cannot invoke lambdas in the db project from the api one. It is clear why.
Question: is it possible to call a lambda in project1 from a lambda in project2 while both are running locally, without exposing an API URL? (I know that I can call it via AJAX.)
Absolutely! You'll need to use the aws-sdk in your project to make the lambda-to-lambda call, both locally and in AWS. You'll then need serverless-offline-lambda-invoke to make the call work offline (note the endpoint configuration option, which you'll need to set locally).
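A rough sketch of such an invocation wrapper with the aws-sdk (v2). The function name, region, and port are placeholders for whatever your setup uses; IS_OFFLINE is the environment flag serverless-offline sets when running locally:

```typescript
import * as AWS from 'aws-sdk';

// When running under serverless-offline, point the SDK at the local
// invoke endpoint instead of AWS (port 3002 is just an example).
const lambda = new AWS.Lambda(
  process.env.IS_OFFLINE
    ? { region: 'us-east-1', endpoint: 'http://localhost:3002' }
    : { region: 'us-east-1' }
);

// api-layer wrapper that calls into the db layer by invocation,
// never by require()-ing db code directly.
export async function getAccount(accountId: string) {
  const result = await lambda
    .invoke({
      FunctionName: 'db-service-dev-getAccount', // deployed name of the db-layer function
      Payload: JSON.stringify({ accountId }),
    })
    .promise();
  return JSON.parse(result.Payload as string);
}
```

Because the endpoint switch is driven by the environment, the same wrapper code works unchanged once deployed to AWS.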
I am currently designing a cloud service that will serve many users per instance. I would like someone to be able to create their own instance from a webpage, which will start up a Docker setup with custom naming and settings via environment variables.
My question is: what tools are out there that would allow me to create, for example, custom docker-compose files from the details provided by the user and fire up an instance for them dynamically? Also, if I were to use a subdomain for each instance, how would I dynamically create the mappings for that?
I would be looking to deploy on AWS using docker and NGINX.
Thanks
I am working on a big data project where I am trying to get tweets from Twitter, analyse them, and make predictions from them.
I followed this tutorial for gathering the tweets: http://blog.cloudera.com/blog/2012/10/analyzing-twitter-data-with-hadoop-part-2-gathering-data-with-flume/
Now I am planning to build a microservice that can replicate itself as I increase the number of topics I want tweets on. From the code I have written to gather the tweets, I want to make a microservice that can take a keyword and create an instance of that code for that keyword and gather tweets; for each keyword, an instance should be created.
It would also be helpful if you could tell me what tools to use for such an application.
Thank you.
I want to make a microservice that can take a keyword and create an instance of that code for that keyword and gather tweets; for each keyword, an instance should be created.
You could use Kubernetes as the underlying cluster/deployment infrastructure. It has an API that allows you to deploy new services programmatically. So what you would have to do is:

1. Set up a basic service container for your twitter-service and make it available in a container repository.
2. Deploy a first service based on your container. The service configuration will contain the keyword that the service uses, as well as information about the Kubernetes cluster (how to access the cluster API and where to find the container in the repository).
3. Your first service now has all the information it needs to automatically create additional service descriptions for Kubernetes (with other keywords) and deploy those additional services by calling the Kubernetes cluster API.
4. Since the additional services are passed all the necessary information as well, they can themselves start even more services, and so on.
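The per-keyword service description that each service would generate and submit to the Kubernetes API could look roughly like this (the names, labels, and image are all placeholders):

```yaml
# Sketch of a per-keyword Deployment, generated once per keyword.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twitter-service-bitcoin
  labels:
    app: twitter-service
    keyword: bitcoin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: twitter-service
      keyword: bitcoin
  template:
    metadata:
      labels:
        app: twitter-service
        keyword: bitcoin
    spec:
      containers:
        - name: twitter-service
          image: registry.example.com/twitter-service:latest
          env:
            - name: KEYWORD        # the only thing that varies per instance
              value: "bitcoin"
```

Only the name, labels, and the KEYWORD environment variable change between instances, so the manifest is easy to template from a single base description.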
You probably need to put some effort into figuring out the cluster provisioning, but that can also be done automatically with auto-scaling (available for Google or AWS clouds for example).
A different approach would be to run a horizontally scaled cluster of your basic twitter service, whose instances use a self-organization algorithm to divide among themselves the keywords placed in a database or event queue.
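One simple self-organization scheme (my own illustration, not something from the question) is to hash each keyword to a worker index, so every instance can independently work out which keywords it owns without any coordinator:

```typescript
// Deterministic keyword-to-worker assignment for a fixed-size cluster.
// Each worker knows only its own index and the total worker count.
function hashKeyword(keyword: string): number {
  // FNV-1a string hash; any stable hash works here.
  let h = 2166136261;
  for (let i = 0; i < keyword.length; i++) {
    h ^= keyword.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force to an unsigned 32-bit value
}

function assignKeyword(keyword: string, workerCount: number): number {
  return hashKeyword(keyword) % workerCount;
}

// A worker polls the shared keyword list (database or queue) and keeps
// only the keywords that hash to its own index.
function myKeywords(all: string[], myIndex: number, workerCount: number): string[] {
  return all.filter((k) => assignKeyword(k, workerCount) === myIndex);
}
```

Every keyword lands on exactly one worker, and adding a keyword needs no redeployment; the trade-off is that changing the worker count reshuffles the assignments, which a consistent-hashing scheme would soften.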