Serverless inter-lambda local communication

I have a serverless project with 3 "layers" - api, services and db. Each layer is just a set of functions deployed individually (I have package.individually set to true in serverless.yml). All layers communicate using the invocation mechanism, from the top (api) down to the bottom (db). Only the api layer has an API Gateway URL; functions in the other layers do not need to be exposed via an API URL.
Now the project is growing and we have more developers. I want to prevent issues where somebody writes const accountDb = require('../db/account') in, say, an api module (the api layer must call the db layer only through the invocation wrapper).
I'd like to split the single serverless project into 3 separate projects, but I'm stuck on running them locally. I can run them locally on different ports, but I'm unable to invoke lambdas in the db project from the api one. It is clear why.
Question: is it possible to call a lambda in project1 from a lambda in project2 while both are running locally, without exposing an API URL? (I know that I can call it via AJAX.)

Absolutely! You'll need to use the aws-sdk in your project to make the lambda-to-lambda call, both locally and in AWS. You'll then need to use serverless-offline-lambda-invoke to make the call work offline (note the endpoint configuration option, which you'll need to set locally).
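For illustration, here is a minimal sketch of that pattern. The question's project is Node, but the SDK call has the same shape in every AWS SDK; this version uses Python's boto3. The local port, the function name and the IS_OFFLINE switch are assumptions, not taken from the question:

```python
import json
import os

import boto3

if os.environ.get("IS_OFFLINE"):
    # Locally, point the SDK at the offline invoke endpoint exposed by the plugin
    # (port and credentials are placeholders).
    lambda_client = boto3.client(
        "lambda",
        endpoint_url="http://localhost:3002",
        region_name="us-east-1",
        aws_access_key_id="local",
        aws_secret_access_key="local",
    )
else:
    # In AWS, the default client resolves the real Lambda endpoint.
    lambda_client = boto3.client("lambda")


def get_account(account_id):
    """Invocation wrapper: the api layer calls the db layer only through this."""
    response = lambda_client.invoke(
        FunctionName="db-service-dev-getAccount",  # assumed deployed name of the db function
        InvocationType="RequestResponse",
        Payload=json.dumps({"accountId": account_id}),
    )
    return json.loads(response["Payload"].read())
```

The wrapper is also a natural place to enforce the layering rule: api code imports the wrapper, never ../db/account directly.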

Related

Connect Azure Functions to a local SignalR instance

I am currently writing tests for an existing project based on Azure Functions. The project uses SignalR to send live update messages to the clients.
For my tests I am currently using a SignalR instance that is running in the cloud, but I need to replace it with a "local" instance on the system that is running the tests, so I can be 100% sure that the SignalR message is coming from my test session.
Does anybody have an idea how to get a SignalR server running in a Docker container for my tests? (I need a connection string I can provide to the Azure Functions app.)
I could not find anything online. I am sure I am not the only one who wants to test whether SignalR messages are sent correctly, and I would prefer not to implement the SignalR server myself.
The bindings available in Azure Functions are for the Azure SignalR Service, not SignalR itself, so unfortunately there is no way to test this locally.
Instead, you could simply create a test Azure SignalR Service instance and use that.
I didn't find any way to achieve this, so I created a repo with a small mock service for the Azure SignalR Service. I hope I am allowed to post such things here.
My repo and my Docker image
Feel free to use or fork it. I am not sure if I will ever find any time to maintain it.

Should I add DB, API and FE in one docker-compose?

I have a project with a FE, a BE and a DB.
All the tutorials I found use the three in one file.
Should the DB be in one docker-compose file and the BE and FE in another?
Or should it be one file per project, with DB, FE and BE together?
[UPDATE]
The stack I'm using is Spring Boot, Postgres and Angular.
Logically your application has two parts. The front-end runs in the browser, and it makes HTTP requests to the back-end. The database is an implementation detail of the back-end and not something you separately need to manage.
So I'd consider two possible Compose layouts:
This is "one application", and there is one docker-compose.yml that manages all of it.
The front- and back-end are managed separately, since they are two separate components with a network API. You have a frontend/docker-compose.yml that manages the front-end, and a backend/docker-compose.yml that manages the back-end and its associated database.
Typical container style is not to have a single shared database. Since it's easy to launch an isolated database in a container, you'd generally have a separate database per application (or per microservice if you're using that style). Of the options you suggest, a separate Compose file only launching a standalone database is the one I'd least consider.
You haven't described your particular tech stack here, but another consideration is that you may not need a container for the front-end. If it's a plain React application, the "classic" development flow of compiling it to static files and publishing them via any convenient HTTP service still works with a Docker-based backend. You get some advantages from this path like Webpack's hashed build files, so you can avoid disrupting running clients when you push out a new build. If you separate the front- and back-ends initially, then you can change the way the front-end is deployed without affecting the back-end at all.

serverless (sls) deploy specific resource

How do you deploy a specific AWS resource using the Serverless Framework?
I know it supports deploying specific Lambda functions using sls deploy -f <function>. Wondering if there is a similar option to target a specific AWS resource?
In my use case, I have an API, ~50 Lambdas, DynamoDB, SQS, Cognito user pools etc. Each time I make a change to Cognito (or anything other than Lambda code), I have to run a complete sls deploy, which takes ~10-15 minutes. Wondering if there is a way to skip the complete deploy.
There is no way to skip a full deployment, as the Serverless Framework uses CloudFormation under the hood and it has to update the whole stack. The only resources that you can update separately are functions, as you mentioned, but that is only intended for development and it does not recognize all properties during an update.

Is all WSO2 API Manager's configuration saved in the database?

Say one implements a WSO2 API Manager Docker instance connecting to a separate database (like MySQL) which is not dockerized. Say some API configuration is made within the API Manager (like referencing a Swagger file on GitHub).
Say someone rebuilds the WSO2 API Manager Docker image (to modify CSS files, for example). Will the past configuration still be available from the separate database? Or does one have to reconfigure everything in the new Docker instance?
To put it another way, if one needs to reconfigure everything, is there an easy way to do it? Something automatic?
All the configurations are stored in the database. (Some are stored in the internal registry, but the registry persists its data in the database in the end.)
API artifacts (Synapse files) are saved in the file system [1]. You can use API Manager's API import/export tool to migrate API artifacts (and all other related files such as Swagger definitions, images, sequences, etc.) from one server to another.
[1] <APIM_HOME>/repository/deployment/server/synapse-configs/default/api/

iOS app with Django

So we currently have a website that was created using Django. Now we would like to create a native iOS app that uses the same backend, so we don't have to re-code the whole thing. From my understanding, there are two alternative routes:
1) Call Django URLs directly, which then call a view function. Within that function, create an HttpResponse with encoded JSON data and send that back.
2) Create a REST service from the Django server with something like Tastypie. However, aside from doing straightforward GET calls on an object, I don't see how we can call custom functions in our Django models from Tastypie. Can we even do that?
I find it surprising that there is not a lot of information about consuming a web service from iOS with existing backends like Django or RoR. For example, I know that Instagram uses Django, but how do they communicate from iOS to their servers?
Thanks a lot!
I am currently working on an iOS app for iPhone, with Django / Tastypie in the backend. We do both 1 and 2. The resources are offered REST-style (after auth) via Tastypie, and any custom function calls (for example, creating a new user) are handled by views.py at various REST endpoints, which return JSON.
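A minimal sketch of that split, with made-up model and URL names (none of this is from the original project): standard CRUD goes through a Tastypie resource, while the custom "create a new user" call is a plain view that returns JSON (JsonResponse is just the modern shorthand for an HttpResponse with encoded JSON data).

```python
# api.py -- the REST side (Tastypie)
from tastypie.resources import ModelResource

from myapp.models import Account  # hypothetical model


class AccountResource(ModelResource):
    class Meta:
        queryset = Account.objects.all()
        resource_name = "account"  # served at /api/v1/account/


# views.py -- the RPC-style side: a custom function behind a URL, returning JSON
import json

from django.contrib.auth.models import User
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST


@csrf_exempt  # the iOS client authenticates with a token, not a CSRF cookie
@require_POST
def create_user(request):
    payload = json.loads(request.body)
    user = User.objects.create_user(
        username=payload["username"],
        password=payload["password"],
        email=payload.get("email", ""),
    )
    return JsonResponse({"id": user.id, "username": user.username}, status=201)
```

Both get wired up in urls.py: the Tastypie Api() instance for the resources and an ordinary URL pattern for create_user.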
When you can, you should try to use a common way of doing something instead of reinventing the wheel. Given that, REST is a standard style of software architecture for distributed systems, and it works very well when you work with entities/objects.
If you have an API where you interact with entities, it is recommended to use REST interfaces. In Python you have Tastypie or the newer Django REST Framework, which do almost all the work, as you propose in 2).
If you have an API where you interact with services, like a login, then you should build an RPC service: basically a function with remote access, as you explain in 1).
Normally you will need both ways in a robust application. And YES, it is possible to do that. I agree with sampson-chen; we are doing the same. We have a REST interface with Tastypie, and other methods are done with custom RPC services.
The performance in our case is still good, but it mostly depends on the methods you call inside your services, for example a DB query. You have a lot of ways to improve speed, for example using Celery to queue heavy jobs.
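As a small illustration of the Celery suggestion (task, module and URL names are hypothetical), the RPC view only enqueues the heavy work and answers immediately:

```python
# tasks.py
from celery import shared_task


@shared_task
def rebuild_report(account_id):
    # ...expensive queries / aggregation would go here...
    ...


# views.py
from django.http import JsonResponse

from myapp.tasks import rebuild_report  # hypothetical module path


def rebuild_report_view(request, account_id):
    rebuild_report.delay(account_id)  # queue the job instead of blocking the request
    return JsonResponse({"status": "queued"}, status=202)
```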
Hope it helps.
REST APIs, while very useful, limit you to GET, POST, PUT and DELETE actions, which are performed upon resources. This can make it difficult to express other action types, such as sending an email. There are a few ways I've found to handle this within Django/Tastypie:
1) Issue a PUT/PATCH request on an existing resource, setting a flag that lets your backend know to trigger an action. Detecting whether the flag was set can be done inside post_save signal handlers (use django-model-utils' FieldTracker to see if a field was changed from False to True); this also helps make sure your application logic works the same outside your REST API (such as changes via the admin site, a Celery task, an HTML-based view, or the Python shell).
2) Create a non-ORM Resource (e.g. /api/v1/email/) and override the post_list() method, calling your function there (see the sketch after this list).
3) As mentioned elsewhere, create a subordinate resource (/api/v1/myresource/send/).
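Here is a minimal sketch of option 2, assuming a hypothetical send_email() helper; the point is just that the resource is not backed by a model and its POST handler calls a plain function:

```python
from django.http import HttpResponse
from tastypie.resources import Resource

from myapp.emails import send_email  # hypothetical helper


class EmailResource(Resource):
    class Meta:
        resource_name = "email"      # exposed at /api/v1/email/
        allowed_methods = ["post"]

    def post_list(self, request, **kwargs):
        # Parse the JSON body and hand it to the domain function.
        data = self.deserialize(
            request, request.body,
            format=request.META.get("CONTENT_TYPE", "application/json"),
        )
        send_email(to=data["to"], subject=data["subject"], body=data["body"])
        return HttpResponse(status=202)  # accepted; nothing to serialize back
```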
