I was trying to build the asset_management.go chaincode using the Fabric v1.0 codebase, but it fails because getCallerMetadata() and getCallerCert() are not found in the stub. Is there a replacement for these functions in v1.0?
@cjcroix - you can use the GetCreator() function in place of getCallerCert().
I don't think that the caller metadata is relevant anymore with the new messages, but you can use the transient field in the proposal to pass in any extra info needed for authentication/authorization in chaincode, and you can access it using the GetTransient() function.
We are also thinking about passing the entire proposal request into the chaincode in the future. That work was started here.
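To illustrate, here is a minimal sketch of how those two calls could be used in a v1.0 chaincode's Invoke. Only GetCreator() and GetTransient() come from the v1.0 shim; the chaincode name and the way the returned bytes are used are placeholders.

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

type AssetChaincode struct{}

func (t *AssetChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

func (t *AssetChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	// GetCreator returns the serialized identity of the client that signed
	// the proposal (it contains the creator's certificate) -- the v1.0
	// replacement for getCallerCert().
	creator, err := stub.GetCreator()
	if err != nil {
		return shim.Error("failed to get creator: " + err.Error())
	}

	// GetTransient returns the transient map of the proposal; the client can
	// put any extra authentication/authorization material in there without it
	// being written to the ledger.
	transient, err := stub.GetTransient()
	if err != nil {
		return shim.Error("failed to get transient field: " + err.Error())
	}

	// Placeholder: a real chaincode would parse the creator identity and
	// check the transient entries before touching state.
	_ = creator
	_ = transient

	return shim.Success(nil)
}

func main() {
	if err := shim.Start(new(AssetChaincode)); err != nil {
		panic(err)
	}
}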
I am trying to use the spring-cloud-dataflow-rest-client v2.6.0 in an application to launch Spring Cloud tasks. I followed the instructions on this page https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#appendix-identity-provider-azure to secure the Spring Cloud Data Flow server using Azure AD. However, I am unable to get the setup that was provided for the Data Flow shell to work with the SCDF REST client. I know the shell internally uses the SCDF REST client, so I am not sure why it won't work for me.
Which properties should I use if my application, which uses the SCDF REST client, is to launch tasks like the shell does?
I tried the following properties, but I keep getting an invalid scope error:
-Dspring.cloud.dataflow.client.authentication.client-id=yhas7wqh-2a5d-4795-babb-b6213f896b52
-Dspring.cloud.dataflow.client.authentication.client-secret=asjajd8hhsasajdassakja
-Dspring.cloud.dataflow.client.authentication.oauth2.client-registration-id=Batch-Launcher
-Dspring.cloud.dataflow.client.authentication.token-uri=https://login.microsoftonline.com/d8bb2fd3-e835-4d68-b9db-7402a9bf39f1/oauth2/v2.0/token
-Dspring.cloud.dataflow.client.authentication.scope=api://dataflow-server/dataflow.deploy,api://dataflow-server/dataflow.view,offline_access
-Dspring.cloud.dataflow.client.authentication.oauth2.username=abcddemo@afdemo12.onmicrosoft.com
-Dspring.cloud.dataflow.client.authentication.oauth2.password=abcdPwd
-Dspring.cloud.dataflow.client.authentication.basic.username=abcddemo@afdemo12.onmicrosoft.com
-Dspring.cloud.dataflow.client.authentication.basic.password=abcdPwd
The exception that I get:
Caused by: org.springframework.security.oauth2.core.OAuth2AuthorizationException: [invalid_scope] AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope api://dataflow-server/dataflow.deploy offline_access is not valid.
Could someone from the SCDF team please update the Azure provider docs to also cover how one can use the SCDF REST client, like the shell does, to invoke the SCDF API?
I have spent my afternoon getting very excited about the container-native serverless platform 'fn project' - http://fnproject.io/.
I love the idea of the FaaS model but have no intention of locking myself into a particular cloud vendor for most of the lifetime of an app - and several other reasons including the desire to spin up the entire app on a small server anywhere if I choose.
fn project seems great for my needs until I finish perusing the documentation and all the relevant blog posts and suddenly think 'what? Wait....what??? Where are the http operations?'.
I cannot find a single reference anywhere that states if it is even possible to have HTTP triggers for different HTTP operations (i.e. POST, PUT, PATCH, DELETE), let alone how I would do it.
I want to build REST APIs (or certainly at the very least JSON-serving, HTTP-based RPC APIs - if it doesn't have hypermedia links it isn't REST ;) but let's not get into that one in this thread).
Am I missing something here (certainly the correct bit of documentation)??
Can anybody please enlighten me as to how I would do this, or even tell me if I have totally misunderstood what I should use this for?
My excitement has gone soft for now, but I'm hoping that will change with the right information.
It feels odd that I can't find anyone else complaining about this, so I think that indicates my misunderstanding perhaps.
Other solutions such as OpenFaaS look interesting, but I don't want to have to learn how to deploy Kubernetes and Docker swarms if I can avoid it :)
I'm not an expert, but as of now it does not seem possible to specify the HTTP method inside the trigger. Check the latest trigger spec: as you can see, there is no notion of HTTP method there.
However, handling different HTTP methods can be done inside the function itself.
For example, in Java (with fdk-java v1.0.80), you can use com.fnproject.fn.api.httpgateway.HTTPGatewayContext as the first parameter of the function, as described in the section "Accessing HTTP Information From Functions" of the documentation:
In Fn for Java, when your function is being served by an HTTP trigger (or another compatible HTTP gateway) you can get access to the incoming request headers for your function by adding a 'com.fnproject.fn.api.httpgateway.HTTPGatewayContext' parameter to your function's parameters.
Using this allows you to:
...
Access the method and request URL for the trigger
...
You can then retrieve the HTTP method by calling getMethod() on the HTTPGatewayContext passed as a parameter.
In other languages (with the other FDKs), it's possible to do the same:
in Go: an example calling RequestMethod() on the context
in Ruby: class HTTPContext
in Python: class HTTPGatewayContext
in Node: class HTTPGatewayContext
From these different contexts, you'll then be able to get the method parameter passed with fn invoke --method=[GET|POST|...] (via the fn-http-method header).
The main drawback here is that all HTTP methods should be handled in the same function. Nonetheless, you can structure your code to have only one class per method.
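To make that concrete, here is a rough sketch of one Go function dispatching on the method via the Go FDK. RequestMethod() is the call referenced above for Go; the fdk.GetContext/fdk.HTTPContext names are my assumptions and should be checked against the current fdk-go documentation.

package main

import (
	"context"
	"fmt"
	"io"

	fdk "github.com/fnproject/fdk-go"
)

func main() {
	fdk.Handle(fdk.HandlerFunc(handler))
}

func handler(ctx context.Context, in io.Reader, out io.Writer) {
	// When invoked through an HTTP trigger, the context should carry the
	// gateway information (method, URL, headers); assumed API, see lead-in.
	hctx, ok := fdk.GetContext(ctx).(fdk.HTTPContext)
	if !ok {
		fmt.Fprintln(out, "not invoked via an HTTP gateway")
		return
	}

	// One function, one switch on the HTTP method.
	switch hctx.RequestMethod() {
	case "GET":
		fmt.Fprintln(out, "read the resource")
	case "POST":
		fmt.Fprintln(out, "create the resource")
	case "PUT", "PATCH":
		fmt.Fprintln(out, "update the resource")
	case "DELETE":
		fmt.Fprintln(out, "delete the resource")
	default:
		fmt.Fprintln(out, "unsupported method: "+hctx.RequestMethod())
	}
}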
After some further thought it seems fairly clear now what my actual misunderstanding was....
When I have built Serverless framework services in the past (or built and deployed Lambda functions using Terraform) I have been deploying to AWS and so have been using AWS's API Gateway offering (their product is actually called API Gateway, but it's important to recognise that an API gateway is also a distributed systems / micro-services design pattern).
An API gateway makes it possible to route specific HTTP request types, including by method (GET, POST, PUT, DELETE), to the desired functions.
Platforms such as Fn project and OpenFaaS do not provide an out-of-the-box API gateway solution, and it seems we would need to take care of this ourselves.
The above-mentioned platforms are about the deployment of functions; we find the other bits via our product of choice.
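For completeness, until a proper gateway product is in place, the "take care of it ourselves" part can be as small as a plain HTTP service in front of Fn that picks the target function from the method and path. A minimal sketch in Go; the route table and the trigger URLs are made-up placeholders.

package main

import (
	"io"
	"log"
	"net/http"
)

// routes maps "METHOD /path" to the URL of the function that should serve it.
// The Fn trigger URLs below are placeholders, not real endpoints.
var routes = map[string]string{
	"GET /items":    "http://fn-server:8080/t/myapp/list-items",
	"POST /items":   "http://fn-server:8080/t/myapp/create-item",
	"DELETE /items": "http://fn-server:8080/t/myapp/delete-item",
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		target, ok := routes[r.Method+" "+r.URL.Path]
		if !ok {
			http.NotFound(w, r)
			return
		}

		// Forward the original body and headers to the selected function.
		req, err := http.NewRequest(r.Method, target, r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		req.Header = r.Header.Clone()

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		// Relay the function's response (headers, status, body) to the caller.
		for k, v := range resp.Header {
			w.Header()[k] = v
		}
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	})

	log.Fatal(http.ListenAndServe(":9000", nil))
}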
I have been trying to implement DI for Azure Functions where the function is triggered by Service Bus (topics/subscriptions in this case):
[Singleton]
[FunctionName("Alert")]
public static async Task Alert(
    [ServiceBusTrigger(Topic.Alert, Subscription.PowerBi, Connection = "servicebusconnectionstring")] Message message,
    [Inject] IPowerBiService powerBiService,
    [Inject] IQueueService queueService)
I have read about Azure Functions and DI on following sites:
https://mcguirev10.com/2018/04/03/service-locator-azure-functions-v2.html
https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/
https://github.com/introtocomputerscience/azure-function-autofac-dependency-injection
All examples work fine using an HTTP trigger; I assume the IIS host is up and running and contains the services. But using a Service Bus trigger, I can't get it to work. I have implemented the solutions mentioned above, and a few more, but they all hit the same issue: the code works, but the services are created per message/trigger.
Has anyone out there managed to do this, or isn't it possible?
NOTE (update):
I got some more information that I haven't had time to verify yet: I have been using a Consumption plan for my Azure Functions, and it may be the case that you need an App Service plan instead (I used Consumption since that pricing model is more convenient). Does anyone know more about this?
I will look into this later this week.
I just want to confirm that it works fine now using an App Service plan instead of a Consumption plan. The difference is a "cold start" versus a "warm" host.
I guess all the different DI implementations should work fine.
I have been using the following: https://github.com/MV10/Azure.FunctionsV2.Service.Locator
I am trying to create a Google Assistant for my Raspberry Pi in Kotlin. I implemented an OAuth flow using the so-called "device flow" proposed in this IETF draft, since my Raspberry Pi will later just expose a web interface and does not have any input devices or a graphical interface.
Google does support this flow (of course) and I obtain a valid access token with user consent in the end. For testing purposes I also tried a default authorization flow that just forwards the user to localhost, as is normally done, but it did not solve the problem.
I tested the access token using this tool and it confirmed validity of scope and token. So the token itself should work.
Scope is: https://www.googleapis.com/auth/assistant-sdk-prototype as documented here
This does not actually point to any valid web resource, but it is referenced in all the documentation.
Then I tried to stream audio data to the Assistant SDK endpoint using the Java stubs provided by gRPC. I took a third-party reference implementation as a guide for how to authenticate the RPC stub. But neither the reference implementation nor my own works. They both report:
io.grpc.StatusRuntimeException: UNAUTHENTICATED: Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
The stub is authenticated this way:
embeddedAssistantStub.withCallCredentials(
    MoreCallCredentials.from(OAuth2Credentials
        .newBuilder()
        .setAccessToken(AccessToken(
            myAccessToken,
            myAccessTokenExpirationDate))
        .build()))
and the authenticated request is performed like this:
val observer = authenticatedEmbeddedAssistantStub.converse(myStreamObserverImplementation)
observer.onNext(myConfigConverseRequest)
while (more audio data frames available) {
    observer.onNext(myAudioFrameConverseRequest)
}
observer.onCompleted()
(I prefixed pseudo-variables with "my" for clarity; they can consist of more code in the actual implementation.)
I even contacted the author of this demo implementation. He told me, last time he checked (several months ago) it was working perfectly fine. So I finally ran out of options.
Since the client implementation I took as a reference used to work, and I do actually authenticate the stub (although the error message suggests the opposite), probably either my access token, despite being valid and having the correct scope, is not a suitable credential for the Assistant API (though I followed Google's suggestions), or the API servers had a change that is not properly documented in the getting-started articles by Google.
So: did anyone run into the same problem, and does anyone know how to fix it? I have the project on GitHub, so if anyone needs the broken source code, I can do a temporary commit that produces the error.
Note, to save some work for the mods: this issue refers to this and this question, both unresolved and using different languages but describing a similar problem.
Well, it seems I was right about my second assumption: the error is server-side. Here is the GitHub issue; let's just wait for the fix.
https://github.com/googlesamples/assistant-sdk-python/issues/138
Every time I change the chaincode and deploy it, it returns a new chaincodeID and I have to do init again. In a production environment we cannot do this; we just want to update the chaincode, and the historical data must be kept. I searched and found https://jira.hyperledger.org/browse/FAB-22, which tells me Hyperledger does not currently support chaincode upgrade. So what can I do if I need this now? If I have misunderstood it, please tell me. Thanks!
As you found in FAB-22, Fabric v0.5-0.6 has no support for chaincode “upgrade”. The reason for this behavior is how Fabric saves information in the ledger.
When chaincode calls the PutState method:
PutState(customKey string, value []byte) error
Fabric will automatically add the ChaincodeId to the key and save the provided “value” under the name CHAINCODE_ID + customKey.
As a result, each chaincode has access only to its own variables. After an update, the chaincode receives a new ChaincodeId and therefore a new visibility area.
We found several workarounds for how to deal with this limitation.
Custom upgrade feature:
In your chaincode (v1) you can create a function “readAllVars” which loads all variables from the ledger using the “stub.RangeQueryState” method.
When the new version (v2) is deployed, you can make a cross-chaincode request to (v1) using “InvokeChaincode”, read the previous state via “readAllVars”, and then save everything in the (v2) area of visibility.
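A rough sketch of the two halves of that workaround, written against the v0.6-style shim. The key range, the JSON encoding and the exact InvokeChaincode/iterator signatures are assumptions from memory and should be checked against the shim version you are running; in reality the two functions live in the v1 and v2 chaincodes respectively and are called from their Invoke/Query dispatch.

package main

import (
	"encoding/json"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	"github.com/hyperledger/fabric/core/util"
)

// v1 side: expose everything this chaincode has written so the next
// version can import it.
func readAllVars(stub shim.ChaincodeStubInterface) ([]byte, error) {
	// Assumption: an empty start/end key means "the whole range" on this shim.
	iter, err := stub.RangeQueryState("", "")
	if err != nil {
		return nil, err
	}
	defer iter.Close()

	all := make(map[string][]byte)
	for iter.HasNext() {
		key, value, err := iter.Next()
		if err != nil {
			return nil, err
		}
		all[key] = value
	}
	// Ship the whole state back to the caller as one JSON blob.
	return json.Marshal(all)
}

// v2 side: pull the old state across and re-save it under the new
// ChaincodeId (i.e. into v2's own visibility area).
func migrateFromV1(stub shim.ChaincodeStubInterface, v1Name string) error {
	blob, err := stub.InvokeChaincode(v1Name, util.ToChaincodeArgs("readAllVars"))
	if err != nil {
		return err
	}

	var all map[string][]byte
	if err := json.Unmarshal(blob, &all); err != nil {
		return err
	}
	for key, value := range all {
		if err := stub.PutState(key, value); err != nil {
			return err
		}
	}
	return nil
}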
DAO layer:
You can create a separate chaincode which is responsible for “read/write” operations. All versions will use this DAO as a proxy for all “PutState” and “GetState” requests. With such an approach, all chaincode versions work in the same area of visibility. At the same time, this DAO layer becomes responsible for security and should guarantee that no other chaincodes have access to the private information.
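A stripped-down sketch of such a DAO chaincode, again on the v0.6-style shim (Init/Invoke/Query signatures as I recall them for 0.6). The "put"/"get" protocol is illustrative, and the caller check is only indicated by a comment; a real implementation must verify that the request really comes from one of your own chaincode versions.

package main

import (
	"errors"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// DAOChaincode owns the state. Application chaincode versions never call
// PutState/GetState themselves; they always go through this proxy (via
// InvokeChaincode/QueryChaincode), so every version shares one visibility area.
type DAOChaincode struct{}

func (d *DAOChaincode) Init(stub shim.ChaincodeStubInterface, function string, args []string) ([]byte, error) {
	return nil, nil
}

func (d *DAOChaincode) Invoke(stub shim.ChaincodeStubInterface, function string, args []string) ([]byte, error) {
	// TODO: verify the caller here -- otherwise any chaincode could use the DAO.
	switch function {
	case "put":
		if len(args) != 2 {
			return nil, errors.New("put expects <key> <value>")
		}
		return nil, stub.PutState(args[0], []byte(args[1]))
	case "get":
		if len(args) != 1 {
			return nil, errors.New("get expects <key>")
		}
		return stub.GetState(args[0])
	}
	return nil, errors.New("unknown function: " + function)
}

func (d *DAOChaincode) Query(stub shim.ChaincodeStubInterface, function string, args []string) ([]byte, error) {
	if function == "get" && len(args) == 1 {
		return stub.GetState(args[0])
	}
	return nil, errors.New("unknown query: " + function)
}

func main() {
	if err := shim.Start(new(DAOChaincode)); err != nil {
		panic(err)
	}
}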