I developed my blockchain with Composer 0.14 without problems. I managed my users (participants) with secrets, and everything was stored in the blockchain.
Now with Composer 0.15, we have to use cards to connect to the blockchain (even if a hidden function exists). I want to respect the current philosophy...
If I understood correctly, I now have to manage users in my webapp and map each user to their card, so that they connect to the blockchain as the right participant. Am I right?
Another question: how do I deal with multiple webapps using the same participants? Do I have to generate cards on each server?
Finally, how do I make access more secure for cards saved on a server? User cards could be used without their owners' consent (hacking, bugs, ...).
Web server: Ubuntu/Node.js, though the question applies to all platforms.
now I have to manage users in my webapp and map each user to their card, so that they connect to the blockchain as the right participant. Am I right?
It's more accurate to say: map your identity to the right participant in Composer.
multiple webapps
If you're spinning up separate webapps, they can use the same business network, and the identities they authenticate with are mapped to participants in Composer. If an identity (mapped to a participant) uses one of those webapps, then it can (say) transact on the business network in question. You could, for example, set up a persistent shared store accessible by all your webapps, or deploy a REST server with a persistent store for cards -> https://hyperledger.github.io/composer/integrating/deploying-the-rest-server.html
secure storage of cards
A Hardware Security Module (a cryptographic hardware-based option for key storage)? A Trusted Platform Module?
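Short of dedicated hardware, a baseline mitigation is to make sure card files stored on the server are readable only by the account that needs them. A minimal sketch, assuming cards are stored as plain files on a POSIX server (the path and function name are illustrative, not part of any Composer API):

```python
import os
import stat

def harden_card_file(path: str) -> None:
    """Restrict a stored card file to owner read/write only (mode 0o600).

    A baseline mitigation, not a substitute for an HSM: it stops other
    local accounts from reading the card, but not a compromised owner
    account or a stolen disk.
    """
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600

# usage sketch:
# harden_card_file("/var/cards/alice@my-network.card")
```

Encrypting the card store at rest and auditing card usage would be the next steps up from this.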
Some questions about service accounts and best practices on GCP.
1) I'm able to create a "brand new" service account. How can I ensure that this new service account doesn't have any privileges bound to it? I'm asking because, for a project, I need to create multiple service accounts with only one permission: write access to a single Google Storage bucket. Nothing more. How can I ensure that this is the only granted permission and nothing else?
2) Should I create a new Google Cloud project for every customer I have, for example one project for each website that I'll host on GCP, or would a single company project (in this case, my company's) be enough to hold all the Compute instances, Storage buckets and so on needed by my customers?
Managing hundreds of projects would be overkill; if possible, I'd prefer to avoid this without impacting security.
Thank you.
You can only constrain service account permissions for enabled services that implement IAM (the Google Cloud Platform services, including Cloud Storage). For services that don't implement IAM, the only way to limit a service account's authorization is through OAuth scopes.
Projects appear to provide a more robust security perimeter for separating your tenant customers. Additionally, you get enforced separation of billing, logging, auditing etc. It's debatable whether managing per-customer projects (each owning a bucket and related service accounts) has different security properties than a multi-customer project (with many buckets and service accounts), since service accounts can easily be extended across projects. Whichever path you choose, I recommend you ensure control is effected programmatically, to minimize the human error of granting one customer's account(s) access to another customer's bucket(s).
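That programmatic control can be as simple as a check that runs over your grants before they are applied. A minimal sketch, where every name is illustrative (this is not a real Google Cloud API, just the cross-tenant check expressed in code):

```python
def find_cross_customer_grants(bindings, account_owner, bucket_owner):
    """Flag grants where a service account is bound to another customer's bucket.

    bindings: iterable of (service_account, bucket) pairs you intend to apply.
    account_owner / bucket_owner: dicts mapping each resource to its customer.
    Returns the grants whose account and bucket belong to different customers.
    """
    return [
        (acct, bucket)
        for acct, bucket in bindings
        if account_owner[acct] != bucket_owner[bucket]
    ]

# hypothetical data: one correct grant, one accidental cross-tenant grant
bindings = [
    ("sa-acme@proj.iam.gserviceaccount.com", "acme-assets"),
    ("sa-acme@proj.iam.gserviceaccount.com", "globex-assets"),  # mistake
]
account_owner = {"sa-acme@proj.iam.gserviceaccount.com": "acme"}
bucket_owner = {"acme-assets": "acme", "globex-assets": "globex"}

bad = find_cross_customer_grants(bindings, account_owner, bucket_owner)
print(bad)  # [('sa-acme@proj.iam.gserviceaccount.com', 'globex-assets')]
```

In practice you'd feed this from your real IAM policy bindings (e.g. fetched via the Cloud Resource Manager / Storage APIs) rather than hand-written dicts.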
HTH!
I think I am missing something, but I do not know where the code from here
https://hyperledger.github.io/composer/latest/managing/identity-issue.html
should be used.
no problem.
They're intended for use from client applications that want to 'consume' the business network (i.e. one already deployed), i.e. for creating business network cards containing [issued] blockchain identities. These come from a CA server under the covers when the Composer JS API 'identity issue' is invoked: the CA server issues the actual certificates, which are combined with business network metadata to create a business network card.
The JavaScript APIs / programmatic examples are for when you're using composer-client, i.e. issuing identities programmatically from a client application (e.g. an Angular-based user registration module in an app, etc.).
So an example might be where, once a user is registered, you have the 'card' creation process automated, so that they can then import the card to their wallet and connect to the business network. See https://github.com/hyperledger/composer-sample-networks/blob/master/packages/pii-network/test/pii.js#L134 (FileSystemCardStore, for storing cards on disk, would replace MemoryCardStore in this example). ImportCardForIdentity is a function defined further up, FYI, and 'alice' is the 'admin' card (in that test) that creates participants, issues identities, and then creates and imports cards.
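For a feel of what the card creation step actually produces: a business network card is, under the covers, an archive bundling identity metadata with a connection profile (Composer's own IdCard API and card stores assemble this for you). A minimal sketch of that packaging idea, with illustrative field values:

```python
import io
import json
import zipfile

def make_card_bytes(user_name, business_network, connection_profile):
    """Assemble a minimal card-style archive in memory.

    Illustrative only: Composer's IdCard / card store APIs normally do
    this for you. The point is that a card bundles identity metadata
    (metadata.json) with a connection profile (connection.json).
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as card:
        card.writestr("metadata.json", json.dumps({
            "userName": user_name,
            "businessNetwork": business_network,
        }))
        card.writestr("connection.json", json.dumps(connection_profile))
    return buf.getvalue()

data = make_card_bytes("alice", "pii-network", {"name": "hlfv1", "type": "hlfv1"})
with zipfile.ZipFile(io.BytesIO(data)) as card:
    print(sorted(card.namelist()))  # ['connection.json', 'metadata.json']
```

In a real app you would never build cards by hand like this; you'd call the Composer APIs shown in the linked sample, which also handle the issued credentials.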
Another useful example is in this blog posted on the community -> https://www.skcript.com/svr/how-to-build-nodejs-application-for-your-hyperledger-composer-networks/
This tutorial https://medium.com/@CazChurchUk/developing-multi-user-application-using-the-hyperledger-composer-rest-server-b3b88e857ccc shows how you can use the REST APIs to create your identities from a dedicated issuer REST server, and then (just as you would from a client application) authenticate as a REST client and interact in multi-user mode with the business network, using the identities issued in the tutorial to call queries etc.
I know about Elasticsearch and run a server from the Command Prompt in Windows 10, and I work in ASP.NET MVC.
I just want to host on the Azure platform. I have been using shared hosting with SQL Server before, so I need help.
What are the minimum requirements or features I need to host an ASP.NET MVC application compatible with mobile apps (providing APIs, not large scale, only for 1-2 applications), with Elasticsearch running at the back end?
Do I have to get a virtual machine, DocumentDB, etc.?
You have multiple solutions to your scenario.
Using ElasticSearch
1) To run Elasticsearch you need an Azure Virtual Machine; this could be one from the Marketplace, like an Ubuntu Server. The size of the VM will depend on the load it has to handle: maybe you can work with an S1, or you might need an S2. In this case, it's your responsibility to expose the network interfaces for the Elasticsearch service.
2) For your Web App, you'd need an Azure Web App (App Services). Depending on the load, you can go with an S1/S2 and define your scaling strategy if you need to. There are plenty of tools to measure how your Web App is handling load (NewRelic / AppInsights).
3) Finally, it depends on your data, but you might need to store it in persistent storage like Azure SQL or Azure DocumentDB (depending on the nature of the data), in case you need to rebuild your Elasticsearch indexes (and thus reindex from the persistent store).
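The reindex-from-persistent-store pattern in point 3 can be sketched in a few lines. Here the persistent store is stood in for by an in-memory SQLite database and the search index by a toy inverted index (both illustrative; in the real setup they'd be Azure SQL/DocumentDB and Elasticsearch/Azure Search):

```python
import sqlite3

def rebuild_index(conn):
    """Rebuild a toy inverted index (word -> set of doc ids) from the store.

    The search index is treated as disposable; the database is the
    source of truth, so you can drop and rebuild the index at any time.
    """
    index = {}
    for doc_id, body in conn.execute("SELECT id, body FROM docs"):
        for word in set(body.lower().split()):
            index.setdefault(word, set()).add(doc_id)
    return index

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(1, "azure search service"), (2, "elastic search cluster")])
index = rebuild_index(conn)
print(sorted(index["search"]))  # [1, 2]
```

With a real engine, the loop body would become bulk-index calls against the search service instead of dictionary updates.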
Using Azure Search
1) Instead of Elasticsearch, you can use Azure Search; it simplifies the whole scenario, since it's SaaS (Software-as-a-Service) and you don't need to maintain and configure a VM, just use the service API from your Web App. Under the hood, it's basically Elasticsearch/Lucene with added features.
2) You still need the App Service for your Web App.
3) You still need the persistent storage (Azure SQL, DocumentDB) in case you need to reindex your information or create new indexes.
Basically it all boils down to 3 services (VM/Azure Search + App Service + SQL/DocumentDB) plus the network usage your app consumes; that's how you'd calculate your costs.
We currently use both solutions in our products (Elasticsearch for an ELK logging platform / Azure Search for our main client products) and both work well, but it really depends on your wallet and the implementation time you have; the Azure Search approach might be faster.
A little design question here:
The requirements for the system go as follows:
all the content is stored on the backend in a database
the client(mobile app) downloads all of the data to have a local copy for offline access
The database on the backend is quite complex and contains entities/fields which are not (at least not currently) used in the mobile client.
What is the right way to design the local database? Do I need to create a mirror copy of the backend database on the client? Do I need to create my own data model regardless of what is on the backend? Should there be any similarities between the two data models at all?
I currently develop an iOS app for a local business directory, and I use SQLite. This sadly means I must do several hours of data entry when new businesses are added and push the updated DB out, because the desktop site uses the Joomla CMS.
Obviously companies that provide directory services don't have to worry about such things. How do they do it? Core Data accompanied by a screen scraper?
PS. I apologise if this question is inappropriate to be asked on StackOverflow, I didn't know where else to ask.
Generally these companies have a client/server architecture where the data lives on a centralised server and the mobile apps pull the data through an exposed API over the internet.
To replicate this yourself, you would have a server with all the data and expose it through an API/web service (so you'd need to think about authentication and security). Your mobile app then either pulls from it when it needs to update the local database, or sends each query to the web service and gets back the appropriate results, so the database does not live on the iOS device at all. The downside of the first approach (updating the DB) is that you'd need to wait for the DB to fully update before the user could use the application; the downside of the second is that the client needs an active internet connection to make queries.
The first thing you'd want to look at is if/how you can expose the data stored in the Joomla CMS through an API (XML/JSON?).
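The first approach above (pull the data, refresh the local copy) can be sketched with the standard library, since iOS's SQLite has a direct analogue in Python's sqlite3. A minimal sketch where the JSON payload and field names stand in for whatever your Joomla-backed API would actually return:

```python
import json
import sqlite3

def sync_businesses(db, api_payload):
    """Replace the local cache with the latest listing from the server.

    api_payload stands in for the JSON body returned by your web service;
    the field names (id, name, category) are illustrative.
    """
    rows = [(b["id"], b["name"], b["category"]) for b in json.loads(api_payload)]
    db.execute("CREATE TABLE IF NOT EXISTS businesses "
               "(id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
    db.execute("DELETE FROM businesses")  # full refresh; deltas are an optimisation
    db.executemany("INSERT INTO businesses VALUES (?, ?, ?)", rows)
    db.commit()

db = sqlite3.connect(":memory:")
payload = json.dumps([{"id": 1, "name": "Corner Cafe", "category": "food"}])
sync_businesses(db, payload)
print(db.execute("SELECT name FROM businesses").fetchall())  # [('Corner Cafe',)]
```

On iOS the same shape applies: fetch JSON from the API, then upsert into SQLite or Core Data, ideally with an incremental/delta sync once the full refresh becomes too slow.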