After updating chaincode, I lost the previously initialized data - Hyperledger

Every time I change the chaincode and deploy it, it returns a new chaincodeID and I have to run init again. In a production environment we cannot do this; we just want to update the chaincode while keeping the historical data. I searched, and https://jira.hyperledger.org/browse/FAB-22 tells me Hyperledger does not currently support chaincode upgrade. So what can I do if I need this now? If I have misunderstood, please tell me. Thanks!

As you found in FAB-22, Fabric v0.5-0.6 has no support for chaincode “upgrade”. The reason for this behavior is how Fabric saves information in the ledger.
When a chaincode calls the PutState method:
PutState(customKey string, value []byte) error
Fabric automatically prepends the ChaincodeId to the key and saves the provided value under the name CHAINCODE_ID + customKey.
As a result, each chaincode has access only to its own variables. After an update, the chaincode receives a new ChaincodeId and a new area of visibility.
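As an illustration (conceptual only; the prefixing happens inside Fabric's state handling, not in chaincode code, and the IDs shown are placeholders):

// Deployed as v1 (chaincode ID "aaa..."):
stub.PutState("owner", []byte("alice"))
// The ledger entry is effectively keyed as "aaa..." + "owner".
// The same call from the redeployed v2 (chaincode ID "bbb...") reads and
// writes "bbb..." + "owner", so all of v1's data is invisible to v2.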
We found several workarounds to deal with this limitation.
Custom upgrade feature:
In your chaincode (v1) you can create a function “readAllVars” which loads all variables from the ledger using the “stub.RangeQueryState” method.
When the new version (v2) is deployed, you can make a cross-chaincode request to (v1) using “InvokeChaincode”, read the previous state via “readAllVars”, and then save everything in the (v2) area of visibility, as sketched below.
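A hedged sketch of the v1 side, assuming the v0.5/0.6 shim API (RangeQueryState and its iterator; exact signatures may differ between releases):

import "github.com/hyperledger/fabric/core/chaincode/shim"

// readAllVars dumps every key/value pair of this chaincode so that a
// successor version can fetch and re-save them.
func readAllVars(stub *shim.ChaincodeStub) (map[string][]byte, error) {
	// An empty start/end key queries the chaincode's entire key range.
	iter, err := stub.RangeQueryState("", "")
	if err != nil {
		return nil, err
	}
	defer iter.Close()
	vars := make(map[string][]byte)
	for iter.HasNext() {
		key, value, err := iter.Next()
		if err != nil {
			return nil, err
		}
		vars[key] = value
	}
	return vars, nil
}

In v2, a call to stub.InvokeChaincode against the v1 chaincode ID would run readAllVars, after which v2 can PutState each pair into its own area of visibility.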
DAO layer:
You can create a separate chaincode which is responsible for “read/write” operations. All versions use this DAO as a proxy for all “PutState” and “GetState” requests. With this approach, all chaincode versions work in the same area of visibility. At the same time, this DAO layer becomes responsible for security and must guarantee that no other chaincode has access to private information. A minimal sketch follows.
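A minimal sketch of such a DAO chaincode, again assuming the v0.6-style Invoke signature (the type and function names here are illustrative):

import (
	"errors"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

type DAOChaincode struct{}

func (t *DAOChaincode) Invoke(stub *shim.ChaincodeStub, function string, args []string) ([]byte, error) {
	// The DAO is now the security boundary: verify the caller here
	// before touching any state.
	switch function {
	case "put":
		return nil, stub.PutState(args[0], []byte(args[1]))
	case "get":
		return stub.GetState(args[0])
	}
	return nil, errors.New("unknown function: " + function)
}

Business chaincode versions then reach their shared state only through InvokeChaincode calls to this DAO, so replacing a business version never changes the area of visibility that holds the data.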

Related

spring-data-flow task example

I'm using spring-cloud-dataflow with the taskcloud module, but I have some trouble launching a simple example in a container.
I wrote the code following the tiny example (6.3) and deployed it,
but when I try to execute it, it throws:
java.lang.IllegalArgumentException: Invalid TaskExecution, ID 1 not found
at org.springframework.util.Assert.notNull(Assert.java:134)
at org.springframework.cloud.task.listener.TaskLifecycleListener.doTaskStart(TaskLifecycleListener.java:200)
In my evaluation I used the Spring Boot example, and to run it in SCDF I added @EnableTask and configured a SQL Server datasource, but it doesn't work.
I'm insisting on using Spring Cloud Data Flow because I've read that Spring Batch Admin is at end-of-life, but 2.0.0.BUILD-SNAPSHOT works well, and the tiny example works, as opposed to what happens in Spring Cloud Data Flow with the @EnableTask annotation.
It's probably my misunderstanding, but could you please provide me a tiny example, or point me to some URL?
Referencing https://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-rdbms, datasource arguments have to be passed to the Data Flow server and the Data Flow shell (if used) in order for Spring Cloud Data Flow to persist the execution/task/step related data in the required datasource.
Example from the link for a MySQL datasource (a similar setup can be configured for SQL Server):
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.0.0.BUILD-SNAPSHOT.jar \
--spring.datasource.url=jdbc:mysql:<db-info> \
--spring.datasource.username=<user> \
--spring.datasource.password=<password> \
--spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
This error:
Invalid TaskExecution, ID 1 not found
is usually about SCDF's own datasource: SCDF cannot find the task execution table in its own database (not the application's database).
You might fix it by adding the database driver or by fixing the connection URL so that it points to SCDF's database.
The question below might help:
How to properly compile/package a Task for Spring Cloud Data Flow

getCallerMetadata() and getCallerCert() not found in Hyperledger fabric v1.0

I was trying to build the asset_management.go chaincode using the Fabric v1.0 codebase, but it fails because getCallerMetadata() and getCallerCert() are not found in the stub. Is there a replacement for these functions in v1.0?
@cjcroix - you can use the GetCreator() function in place of getCallerCert().
I don't think the caller metadata is relevant anymore with the new messages, but you can use the transient field in the proposal to pass in any extra info needed for authentication/authorization in chaincode, and you can access it using the GetTransient() function.
We are also thinking about eventually passing the entire proposal request into the chaincode as well. That work was started here.
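A hedged sketch against the v1.0 shim (both methods live on ChaincodeStubInterface; how you parse the creator bytes depends on your MSP configuration):

import "github.com/hyperledger/fabric/core/chaincode/shim"

func callerInfo(stub shim.ChaincodeStubInterface) ([]byte, map[string][]byte, error) {
	// Replaces getCallerCert(): the serialized identity (typically an
	// MSP identity wrapping an X.509 certificate) of the tx creator.
	creator, err := stub.GetCreator()
	if err != nil {
		return nil, nil, err
	}
	// Extra authn/authz data passed in the proposal's transient field;
	// it is not written to the ledger.
	transient, err := stub.GetTransient()
	if err != nil {
		return nil, nil, err
	}
	return creator, transient, nil
}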

How can I determine the "reason" behind the status of an EC2 instance that has just been registered on Amazon ELB?

I'm working on a deployment script (more specifically, an Ansible module) that registers an EC2 instance with an Amazon ELB. The script uses the Boto library.
Here's a look at the relevant part of the script:
def register(self, wait):
    """Register the instance for all ELBs and wait for the ELB
    to report the instance in-service"""
    for lb in self.lbs:
        lb.register_instances([self.instance_id])
        if wait:
            self._await_elb_instance_state(lb, 'InService')

def _await_elb_instance_state(self, lb, awaited_state):
    """Wait for an ELB to change state
    lb: load balancer
    awaited_state : state to poll for (string)"""
    while True:
        state = lb.get_instance_health([self.instance_id])[0].state
        if state == awaited_state:
            break
        else:
            time.sleep(1)
(BTW the code above is from Ansible's ec2_elb module.)
So, when the instance is first registered it is 'OutOfService'. The script here "waits" for the instance to reach the state 'InService' after it has passed health checks etc.
So here's the problem: The process above is overly simplistic (which is why I'm trying to customize the module for my own purposes). The main problem I've hit is that if the load balancer is not configured to service the availability zone that the instance resides in, then the instance will remain out of service. Essentially the script above will just hang.
What I'd like to do (and that's why I'm customizing this built-in module) is find a way to determine whether the ELB is just waiting for the instance to pass the health check, OR whether there is some other reason (like an unregistered availability zone) causing it to remain Out of Service.
The Boto library (via the Amazon ELB API) does provide slightly more detail than state: it has a "reason" attribute which is described in the Boto docs (and also the Amazon ELB API docs) as follows:
reason_code (str) – Provides information about the cause of an
OutOfService instance. Specifically, it indicates whether the cause is
Elastic Load Balancing or the instance behind the LoadBalancer.
I could find very little documentation on the reason_code attribute, so I'm not sure a) what the possible return values even are, and b) what they actually mean in relation to my question above.
I think what I want to do is doable, given that Amazon is able to display a detailed reason why an instance is out of service in the management console -- and from what I understand they are dogfooding their API there.
So how/where can I find the more detailed reason behind the instance's status?
Ah, it's the description field of InstanceState:
description (str) – A description of the instance.
I guess that was so vague that my brain ignored it.
It also looks like the possible values of reason_code are two strings:
'ELB'
'Instance'
That's just from playing around with the API; it's not definitive or anything.
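Putting the pieces together, a hedged sketch of a polling loop that surfaces the reason instead of hanging forever (state, reason_code and description are attributes of boto's InstanceState; the timeout value is an arbitrary choice):

import time

def await_elb_instance_state(lb, instance_id, awaited_state, timeout=300):
    """Poll instance health; on timeout, report the ELB's reason_code and
    description instead of spinning forever."""
    deadline = time.time() + timeout
    health = None
    while time.time() < deadline:
        health = lb.get_instance_health([instance_id])[0]
        if health.state == awaited_state:
            return
        time.sleep(1)
    # reason_code is 'ELB' when Elastic Load Balancing itself is the
    # cause (e.g. the instance's availability zone is not enabled on the
    # load balancer) and 'Instance' when health checks are failing.
    raise RuntimeError("instance %s stuck in state %r (reason_code=%r): %s"
                       % (instance_id, health.state,
                          health.reason_code, health.description))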

Spring.NET, Quartz & Transactions

I've just run into a problem with a Quartz job that I'm invoking through Spring. My ExecuteInternal method had a [Transaction] attribute (because it does a load of DB calls) but when it runs, I get the 'No NHibernate session bound to thread' error.
Just wondering if that's because Spring.NET doesn't support the [Transaction] attribute in Quartz objects?
If not, that's fine... I can start a transaction manually, but wanted to check that it was the case, and not a silly error in my config somewhere.
[Update]
I figured it out, actually. The API docs say the preferable way to do this is to use transactions on the service layer. My job was using DAOs to do its work, but my transactions are on my service layer, so I just called service methods from my job to do the same work (saving, updating records, etc.), since they already existed.
The docs also suggest that if you give the SchedulerFactoryObject a DbProvider, you can use transactions in the job itself; but when I did that, it seemed to want my triggers configured in a special table in the DB (which I haven't set up, since my triggers are all in XML). That's possibly another way to do it, though.
Calling service methods works fine for me though.
The [Transaction] attribute works using AOP: Spring.NET creates an AOP proxy for the decorated object, and this proxy creates the session and starts the transaction.
In the ExecuteInternal method, you don't call the method on a proxy, but on the target itself. Therefore Spring cannot intercept the call and do its transaction magic.
Your services are injected, and therefore the [Transaction] attribute works for them.
There's a good explanation in the spring docs on this subject: http://www.springframework.net/doc-latest/reference/html/transaction.html#tx-understandingimpl

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, just by launching or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first, then the new server will be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote data module, which resulted in one app window per client. Maybe this could be a solution? (Because every new client will launch the currently registered executable.)
Update: does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offers easier implementation of 'hot deployment'.
You could separate your appserver into two new servers: one being a simple proxy object redirecting all methods (and optionally containing state info, if any) to the second one, which actually implements your business logic. You would also need to implement a "silent reconnect" feature within your proxy server in order not to disturb connected clients whenever you decide to replace the business appserver. I never did such a design myself before, but I hope the idea is clear.
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules though, as it may look for an existing instance to attach to instead of a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag (a property or an out parameter on some commonly called method) on the server that the client checks periodically that tells the client to disconnect and reconnect (called something like ImBeingRetired).
It's also possible to write callbacks under certain circumstances for datasnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.
(optional) Set up VMware vSphere or ESX, or find a hosting service that already has one.
Store the session variables in db.
Prepare 2 web boxes with 2 distinct IP address and deploy your stuff.
Set up DNS, firewall, load balancer, or a BSD VM so that the name "example.com" resolves to web box 1.
Deploy new version to web box 2.
Switch over to web box 2 using whatever routing method you chose.
Deploy new version to web box 1 if things look ok.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature that lets you map a public IP address to an internal IP address. The ideal way is to use a load balancer configured at 50:50 and change it to 100:0 when you want to do the upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not session. You said the server is stateful. If it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switch-over. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1 and will then reconnect to web box 2.
Also, you could use three IP addresses (one public and two private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I would update the server or change a system wide global setting, I would increment this field. When a client starts it always checks this value, and will check again before any transactions/queries. If the value was ever different from when I first started, then I needed to go through my re-initialization logic, which could easily include a re-login to an updated server.
I was using IIS to publish my app servers, so the data that would change would be the path to the app server. I kept the old ones available, to respond to any existing transactions that were in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server each client last connected to (and therefore would know about).
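Language aside (the original is Delphi), the "data version" pattern sketches easily; here in Go, with invented names:

// Server exposes the "data version" value described above.
type Server interface {
	GetDataVersion() (int64, error)
}

type Client struct {
	srv         Server
	seenVersion int64
}

// checkVersion is called at startup and before every transaction/query;
// a changed value means the client must re-run its initialization logic
// (possibly including a re-login to the updated server).
func (c *Client) checkVersion() (changed bool, err error) {
	v, err := c.srv.GetDataVersion()
	if err != nil {
		return false, err
	}
	if v != c.seenVersion {
		c.seenVersion = v
		return true, nil
	}
	return false, nil
}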
For newer versions (Delphi 2010 and up), there is an interesting solution
for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?
