Is all WSO2 API Manager's configuration saved in the database? - docker

Say one implements a WSO2 API Manager Docker instance connecting to a separate database (like MySQL) which is not dockerized, and some API configuration is made within the API Manager (like referencing a Swagger file in a GitHub repository).
If someone then rebuilds the WSO2 API Manager Docker image (to modify CSS files, for example), will the past configuration still be available from the separate database, or does one have to reconfigure everything in the new Docker instance?
To put it another way: if one does need to reconfigure everything, is there an easy way to do it? Something automatic?

All configurations are stored in the database. (Some are stored in the internal registry, but the registry persists its data to the database in the end.)
API artifacts (Synapse files) are saved on the file system [1]. You can use API Manager's API import/export tool to migrate API artifacts (and all other related files such as Swagger definitions, images, sequences, etc.) from one server to another, as sketched below.
[1] <APIM_HOME>/repository/deployment/server/synapse-configs/default/api/
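As a rough illustration, here is a minimal sketch of scripting an export with the import/export tool (Node 18+, global fetch). The context path, query parameters and admin:admin credentials follow the APIM 2.x import/export webapp docs and are assumptions to adapt to your version; the default self-signed certificate also means TLS verification must be relaxed for a local test (e.g. NODE_TLS_REJECT_UNAUTHORIZED=0 in a dev shell):

```typescript
// Hypothetical sketch: export one API as a .zip through the import/export webapp,
// so it can be re-imported (POST .../import-api) into a freshly rebuilt container.
// The context path "api-import-export-2.1.0-v2", the query parameters and the
// admin:admin credentials are assumptions -- check the tool version you deploy.
import { writeFile } from "fs/promises";

async function exportApi(name: string, version: string, provider: string): Promise<void> {
  const base = "https://localhost:9443/api-import-export-2.1.0-v2";
  const url =
    `${base}/export-api?name=${encodeURIComponent(name)}` +
    `&version=${encodeURIComponent(version)}&provider=${encodeURIComponent(provider)}`;

  const res = await fetch(url, {
    headers: { Authorization: "Basic " + Buffer.from("admin:admin").toString("base64") },
  });
  if (!res.ok) throw new Error(`Export of ${name}-${version} failed: ${res.status}`);

  // The response body is the exported API archive (Swagger, images, sequences, ...).
  await writeFile(`${name}-${version}.zip`, Buffer.from(await res.arrayBuffer()));
}

// Example: exportApi("PizzaShackAPI", "1.0.0", "admin");
```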

Related

Should I add DB, API and FE in one docker-compose?

I have a project with a FE, a BE and a DB.
All the tutorials I found put the three in one docker-compose file.
Should the DB go in one docker-compose file and the BE and FE in another?
Or should there be one file per project with DB, FE and BE?
[UPDATE]
The stack I'm using is Spring Boot, Postgres and Angular.
Logically your application has two parts. The front-end runs in the browser, and it makes HTTP requests to the back-end. The database is an implementation detail of the back-end and not something you separately need to manage.
So I'd consider two possible Compose layouts:
1. Everything together: this is "one application", and there is one docker-compose.yml that manages all of it.
2. The front- and back-end are managed separately, since they are two separate components with a network API: a frontend/docker-compose.yml manages the front-end, and a backend/docker-compose.yml manages the back-end and its associated database.
Typical container style is not to have a single shared database. Since it's easy to launch an isolated database in a container, you'd generally have a separate database per application (or per microservice if you're using that style). Of the options you suggest, a separate Compose file only launching a standalone database is the one I'd least consider.
You haven't described your particular tech stack here, but another consideration is that you may not need a container for the front-end. If it's a plain React application, the "classic" development flow of compiling it to static files and publishing them via any convenient HTTP service still works with a Docker-based backend. You get some advantages from this path like Webpack's hashed build files, so you can avoid disrupting running clients when you push out a new build. If you separate the front- and back-ends initially, then you can change the way the front-end is deployed without affecting the back-end at all.

Synchronize blob files from cloud to IoT Edge Blob (local)

Please refer to the following hypothetical diagram for an IoT Edge device implementation. We want to know if there is an automated mechanism for it using the Azure IoT infrastructure.
An admin application will write several JSON configuration files associated with a specific device. Each device has a different config, and the config files are large (about 1 MB), so using twins is not a good solution.
We want those files stored in the cloud to be sent automatically to the target device, for it to store them in its local blob storage. The local files shall always reflect what is in the cloud, almost like OneDrive.
Is there any facility for this in Azure/IoT Edge? How can we isolate the information for each device without exposing the other configurations stored in the cloud blob?
Upload the blob to Azure Storage (or anywhere, really), and set a properties.desired property containing the link plus a SAS token (or, if you want to keep the URL always the same, a hash of the contents). Your edge module will get a callback (during startup, and during runtime) that the property value has changed, and can connect to the cloud to download the configuration. There's no need to use the local blob storage module; the config can be cached in the edge module's /tmp directory.
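A minimal device-side sketch of that callback, using the Node IoT Edge module SDK (azure-iot-device) on Node 18+. The property name configUrl and the /tmp cache path are assumptions for illustration, and the URL is assumed to already carry the SAS token:

```typescript
// Sketch of an IoT Edge module reacting to a desired-property change that carries
// a blob URL (with SAS token), downloading the file and caching it locally.
// "configUrl" and the /tmp cache path are illustrative assumptions.
import { ModuleClient } from "azure-iot-device";
import { Mqtt } from "azure-iot-device-mqtt";
import { writeFile } from "fs/promises";

ModuleClient.fromEnvironment(Mqtt, (err, client) => {
  if (err || !client) throw err;
  client.getTwin((twinErr, twin) => {
    if (twinErr || !twin) throw twinErr;
    // Fires once at startup with the current desired state, then on every change.
    twin.on("properties.desired", async (delta: { configUrl?: string }) => {
      if (!delta.configUrl) return;
      const res = await fetch(delta.configUrl); // the SAS token is part of the URL
      await writeFile("/tmp/device-config.json", Buffer.from(await res.arrayBuffer()));
      // Report back which config was applied so the cloud side can verify it.
      twin.properties.reported.update({ appliedConfigUrl: delta.configUrl }, () => {});
    });
  });
});
```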

Serverless inter-lambda local communication

I have a serverless project with 3 "layers" - api, services and db. Each layer is just a set of functions deployed individually (I have set package.individually === true in serverless.yml). All layers are able to communicate using the invocation mechanism from the top (api) to the bottom (db). Only the api layer has an API Gateway URL; functions in the other layers do not need to be exposed via an API URL.
Now the project has grown and we have more developers. I want to prevent issues where somebody uses const accountDb = require('../db/account') in, say, api modules (api must call the db layer only through the invocation wrapper).
I'd like to split the single serverless project into 3 different projects, but I'm stuck on running them locally. I can run them locally on different ports but am unable to invoke lambdas in the db project from the api one. It is clear why.
Question: is it possible to call a lambda in project1 from a lambda in project2 while both are running locally, without exposing an API URL? (I know that I can call it by AJAX.)
Absolutely! You'll need to use the aws-sdk in your project to make the lambda-to-lambda call, both locally and in AWS. You'll then need serverless-offline-lambda-invoke to make the call work offline (note the endpoint configuration option, which you'll need to set locally).
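A minimal sketch of such an invocation wrapper, assuming aws-sdk v2 and that the db project is running offline behind a local endpoint; the environment variable, port and example function name are assumptions to match to your serverless-offline / plugin configuration:

```typescript
// Invocation wrapper the api layer can use both locally and in AWS.
// LAMBDA_ENDPOINT (e.g. http://localhost:3002) and the example function name
// are assumptions; match them to your offline plugin configuration.
import { Lambda } from "aws-sdk";

const lambda = new Lambda({
  region: process.env.AWS_REGION || "us-east-1",
  // When set, calls are routed to the locally running db project instead of AWS.
  ...(process.env.LAMBDA_ENDPOINT ? { endpoint: process.env.LAMBDA_ENDPOINT } : {}),
});

export async function invokeDb(functionName: string, payload: unknown): Promise<unknown> {
  const res = await lambda
    .invoke({
      FunctionName: functionName, // e.g. "db-service-dev-getAccount"
      InvocationType: "RequestResponse",
      Payload: JSON.stringify(payload),
    })
    .promise();
  return res.Payload ? JSON.parse(res.Payload.toString()) : undefined;
}
```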

How to provide a SaaS customer with a server snapshot for business continuity concerns

I'm proposing a SaaS solution to a prospective client to avoid the need for local installation and upgrades. The client uploads their input data as needed and downloads the outputs, so data backup and maintenance are not an issue, but continuity of the online software service is a concern for them.
Code escrow would appear to be overkill here and probably of little value. I was wondering whether there is an option along the lines of providing a snapshot image of a cloud server that includes a working version of the app, and for that to be in the client's possession for use in an emergency where they can no longer access the software.
This would need to be as close to a point-and-click solution as possible - say a one-page document with a few steps that a non-web-savvy IT person can follow - for starting up the backup server image and being able to use the app. If I were to create a private AWS EBS snapshot / AMI that includes a working version of the application, and they created an AWS account for themselves, might they be able to kick that off easily enough?
Update: the app is on Heroku at the moment, so hopefully it'd be pretty straightforward to get it running on Amazon EC2.
Host their app with any major PaaS provider, such as EngineYard or Heroku. Check the code into a private GitHub repository that you can assign to them as the owner. That way they have access to the source code and can create a new instance quickly using the repository as the source.
I don't see the need to create an entire service mirror for a Rails app, unless there are specific configuration needs that can't be contained in the project or handled through Capistrano.

How to upload files from an FTP location into MarkLogic

I need to upload files from an FTP location into MarkLogic. Please guide me on this.
MarkLogic doesn't allow accessing external FTP locations from XQuery the way it allows HTTP calls. Nor does it provide FTP servers the way it provides WebDAV servers.
You can, however, easily put a mediator in between that accesses the FTP location instead, and use other means to upload the documents into MarkLogic. The latter can be done through a WebDAV App Server that you can create using the Admin interface, through the built-in REST API in MarkLogic 6 ( http://docs.marklogic.com/REST ), or through custom code like Corona ( http://developer.marklogic.com/code/corona ).
If you write the mediator in Java, you can also use the Java API (see the Java API tab at http://docs.marklogic.com/ ).
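As a rough illustration, here is a minimal mediator sketch (Node 18+) that pulls a file from FTP and then PUTs it into MarkLogic through the REST API's /v1/documents endpoint. The host names, credentials and the basic-ftp dependency are assumptions, and the REST App Server is assumed to accept basic authentication (digest is the default):

```typescript
// Hypothetical mediator: download a file from FTP, then load it into MarkLogic
// via PUT /v1/documents. Hosts, credentials and paths are placeholders.
import { Client } from "basic-ftp";
import { readFile } from "fs/promises";

async function mirrorFtpFileToMarkLogic(remotePath: string, uri: string): Promise<void> {
  // 1. Fetch the file from the FTP server onto local disk.
  const ftp = new Client();
  await ftp.access({ host: "ftp.example.com", user: "ftpuser", password: "secret" });
  const localPath = "/tmp/ftp-download.xml";
  await ftp.downloadTo(localPath, remotePath);
  ftp.close();

  // 2. Insert the document into MarkLogic through the REST API.
  const body = await readFile(localPath);
  const res = await fetch(
    `http://marklogic.example.com:8000/v1/documents?uri=${encodeURIComponent(uri)}`,
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/xml",
        Authorization: "Basic " + Buffer.from("rest-user:rest-password").toString("base64"),
      },
      body,
    }
  );
  if (!res.ok) throw new Error(`MarkLogic rejected ${uri}: ${res.status}`);
}

// Example: mirrorFtpFileToMarkLogic("/outgoing/order-1.xml", "/incoming/order-1.xml");
```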
HTH!
We have an app that needs documents from a shared folder; we run an ETL on them to get them into MarkLogic. You can do this a number of ways. If you are able to, I'd mount the drive on the MarkLogic box and read from there. If that doesn't work, see if you can make those files retrievable via an HTTP GET request. If that doesn't work either, you might want to build a web service.
I personally would avoid WebDAV unless you absolutely need it.
Is this a one-off, batch, or continuous job?
If it's one-off or batch, I would suggest using a script to FTP the files to a local disk and then using mlcp, RecordLoader or xmlsh to push them into MarkLogic.
If it's a continuous job, a custom Java app is probably the way to go.
Do realize that FTP is a horribly sensitive protocol: it can fail in so many ways and needs special port openings, etc. It was designed in the '80s, before firewalls, NAT and the like.
Getting FTP to work reliably, regardless of MarkLogic, is a black art in itself.
If it's possible to use a protocol other than FTP, that would be ideal - say scp, rsync or HTTP.
