I have deployed my business network using composer-rest-server and am able to call the API using Postman.
For now I have hard-coded IDs for participants/assets, so I cannot make another POST call because objects with those IDs already exist.
Where can I delete the existing participants/assets? In Composer Playground there was a delete button on the testing page that provided this functionality.
If you are using POSTMAN, you can use the DELETE request to remove your test data.
Using the Trade sample from the Composer tutorials, you would use the following curl command to remove the Commodity COAL:
curl -X DELETE --header 'Accept: application/json' 'http://localhost:3000/api/Commodity/COAL'
If you want to remove all the data in your Business Network, you could investigate the composer network reset command (described here in the Composer docs).
Using the JavaScript composer-client library you can do the following (the card name and the fully qualified type names below are placeholders for your own business network):
const { BusinessNetworkConnection } = require('composer-client');
// connect to the deployed network using a business network card
const connection = new BusinessNetworkConnection();
await connection.connect('admin@tutorial-network');
// remove a single participant by its identifier
const participantRegistry = await connection.getParticipantRegistry('org.example.mynetwork.Trader');
await participantRegistry.remove('TRADER1');
// remove a single asset by its identifier
const assetRegistry = await connection.getAssetRegistry('org.example.mynetwork.Commodity');
await assetRegistry.remove('COAL');
or you can even remove several resources at once:
// remove a set of assets by their identifiers
await assetRegistry.removeAll(['COAL', 'IRON']);
However, before deleting a participant from the registry, you should revoke the identity bound to it by doing the following:
const IdentityRevoke = require('composer-cli').Identity.Revoke;
let options = {
card: 'admin@tutorial-network',
identityId: 'f1c5b9fe136d7f2d31b927e0dcb745499aa039b201f83fe34e243f36e1984862'
};
IdentityRevoke.handler(options);
You can find more information about revoking an identity in the documentation at the following link.
Since you have deployed the API, use its Swagger interface to look at what you can do. It is a RESTful API, which means each endpoint accepts the HTTP verbs that make sense for it.
Each asset and participant endpoint, for example, accepts DELETE requests where all you have to do is pass the ID of the entity you want to delete.
You can issue POST requests to create new data and PUT requests to update the data.
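For illustration, here is a minimal Python sketch against the generated endpoints, assuming the Trade sample from the Composer tutorial; the org.example.mynetwork type names, field values, and IDs below are placeholders for your own model:
import requests

BASE = "http://localhost:3000/api"

# Create a Commodity (field names follow the tutorial's model file; adjust
# the $class, fields, and owner reference to match your own business network)
commodity = {
    "$class": "org.example.mynetwork.Commodity",
    "tradingSymbol": "COAL",
    "description": "Some coal",
    "mainExchange": "NYSE",
    "quantity": 100,
    "owner": "resource:org.example.mynetwork.Trader#TRADER1",
}
requests.post(f"{BASE}/Commodity", json=commodity)

# Update it in place with PUT
commodity["quantity"] = 50
requests.put(f"{BASE}/Commodity/COAL", json=commodity)

# Delete it by its identifier so the ID can be reused
requests.delete(f"{BASE}/Commodity/COAL")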
One thing to be aware of is that each request will create a new block on the ledger. DELETE doesn't mean the previous block disappears; it simply means the entity is in a deleted state, and your block count keeps increasing for each transaction you issue.
If you want to run tests and make sure your assets get created properly, you can start using the feature files; there is a sample feature to get you started. It uses a specific Composer cucumber package, which you can see if you look inside the package.json. This would be the preferred method for running tests: since this isn't a normal environment where you create test data and then delete it, you have to be careful, as your block count will keep increasing.
In my example architecture, I have an IN-Mobius and an ADN-AE-Thyme (nCube Thyme).
First of all, I created an AE called "ae_test_02", and I can GET this resource via Postman.
After this step, I ran ADN-AE-Thyme (thyme.js), and it created a container called "thyme_01"; I can also GET this resource via Postman.
In that step, thyme.js also adds contentInstances to the "thyme_01" container. I can then get the latest contentInstance with the "/la" parameter via Postman.
At this point the problem began. I created a group resource, and while creating it I tried a couple of solutions, which always fail. These are the values I tried in the "mid" attribute:
{ "m2m:grp": {
"rn": "grp_test_100520_08",
"mt": 3,
"mid": ["3-20200505012920476/la",
"Mobius/3-20200505012920476/la",
"Mobius/thyme_01/la",
"Mobius/ae_test_02/3-20200505012920476/la",
"Mobius/ae_test_02/thyme_01/la",
"ae_test_02/thyme_01/la",
"ae_test_02/3-20200505012920476/la"],
"mnm": 10
}
}
The problem is that I tried these mid paths one by one, but none of them works. When I try to get the latest contentInstances via Postman, I use this URL and the result is "resource does not exist (get_target_url)".
The containers and contentInstances are in the IN-Mobius, and I send my requests to the IN-Mobius. Given this information, how should I set the group's "mid" attribute so that I can get contentInstances via the group resource?
First Edit.
Hi Andreas.
For the first issue, I can get the resource correctly. At this point my aim is to GET the contentInstances in the container, which is a member (mid) of that group.
Second, now I understand that there is no such existing child resource of the group resource itself, okay. As you mentioned, I want to pass a request to all members (containers) of a group resource. For this purpose I will use https://localhost:7579/Mobius/grp_test_100520_08/fopt, but it gives an "ERR_INVALID_ARG_TYPE" error. I know that at least one mid structure is correct, but which one is it?
For the smaller issue, I know I added the same resource multiple times to the mid attribute; I did that because I did not know which addressing scheme is correct.
Also, while creating a group resource, should the group be under the AE resource (/Mobius/ae_test_02/grp_name) or directly under Mobius (/Mobius/grp_name)?
Can group resources be placed directly in the IN-Mobius, or should they be in the MN-Rosemary? Is fanOutPoint only used with an external CSE such as an MN, or can fopt also be used in the IN?
Second Edit.
The "thyme" comes from nCube Thyme (https://github.com/IoTKETI/nCube-Thyme-Nodejs), it creates a container and then randomly create ContainerInstances.
The resource tree looks like;
Mobius >> ae_test_02 (AE resource) >> thyme_01 (Container It created from nCube Thyme https://github.com/IoTKETI/nCube-Thyme-Nodejs) >> ContainerInstances
I have also a resource in >> Mobius >> grp_test_100520_08 (GROUP resource which is uses)
I tried;
{ "m2m:grp": {
"mid": ["Mobius/ae_test_02/thyme_01"],
"mnm": 5
}
}
With this request, fopt.js gives a "callback is not a function" error.
{ "m2m:grp": {
"mid": ["ae_test_02/thyme_01"],
"mnm": 5
}
}
With this request, fopt.js gives the same "callback is not a function" error, but on a different line.
I guessed my fopt.js file was old, so I checked the Mobius GitHub page and got the latest file, but that did not solve it.
My resource tree is as described above, and my fopt.js file is the same as this one:
https://github.com/IoTKETI/Mobius/blob/master/mobius/fopt.js
UPDATE 3.
The "cnm" attribute problem is this; while creating a resouce, CSE will automaticly assign "cnm" attribute according to member size. However, CSE will not this process in UPDATE (PUT) request. From this point, i will create resources, not UPDATE them.
As you mentioned, i send requests to the group's resource, but it gives the "callback is not a function" error. To solve this problem, i downloaded and installed the whole distribution. (https://github.com/IoTKETI/Mobius) After that, i will do same processes again for understand the fopt.js file behaviour. The result wasn't changed, it gives the same error.
I planning to explain whole situation and create an issue, in Mobius github page. I hope they will response soon.
I think there are two issues with your example.
The first issue is with the request to the <Group>. You need to distinguish between requests to the <Group> resource itself and requests to the members of the <Group>.
There is no child resource <la> of the <Group> resource itself. This is why you receive an error message. If you want to pass a request to all members of a <Group> resource, then you need to target the virtual child resource <fopt>. In your case the request should target the URI https://localhost:7579/Mobius/grp_test_100520_08/fopt. Since you already have the <la> resources as members, you won't need to add the /la part to the request. However, I would recommend only adding the <Container> resources to the group and using the target URI https://localhost:7579/Mobius/grp_test_100520_08/fopt/la to retrieve the latest <ContentInstance> of each container.
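For illustration, such a retrieve through the group's <fopt> could look like the following minimal Python sketch, assuming Mobius's default oneM2M HTTP binding; the originator and request id are placeholders, not values from your setup:
import requests

headers = {
    "X-M2M-Origin": "SOrigin",   # placeholder originator with access to the members
    "X-M2M-RI": "req-0001",      # arbitrary request identifier
    "Accept": "application/json",
}

# Retrieve the latest <ContentInstance> of every member <Container> via <fopt>;
# use http:// instead of https:// if use_secure is disabled in your Mobius
resp = requests.get(
    "https://localhost:7579/Mobius/grp_test_100520_08/fopt/la",
    headers=headers,
    verify=False,                # self-signed certificate in a local setup
)
print(resp.status_code)
print(resp.json())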
The second (smaller) issue is that, from what I can tell from your example, you added the same resource multiple times to the group, only with different addressing schemes. Please be aware that the CSE removes duplicate references when creating or updating the mid attribute.
Edit after question update
It is not very clear what your resource tree looks like. So, perhaps you should start with only one resource reference and continue from there. Valid IDs in the mid attribute are either structured (basically the path of the rn attributes) or unstructured IDs (the ri's). The CSE should filter out incorrect IDs, so you should get the correct set of IDs in the result body of the CREATE request.
By the way, where does "thyme" come from? It only appears in a label, which does not form an ID.
Regarding the <fanOutPoint> resource: normally all requests are targeted at the <Group> resource, but requests to the virtual <fanOutPoint> resource are forwarded to all the members of the group. If a resource referenced in mid is accessible, the request is forwarded to it, and the result is collected and becomes part of the result body of the original request.
You also need to be careful about the resource types: only send requests that are valid for the group's members.
Update 2
From the IDs in the mid attribute of the <Group> resource, it looks like the CSE validated the targets (though the cnm (current number of members) attribute is obviously wrong, which seems to be an error in the CSE).
So, you should be able to send requests to the group's <fopt> resource as discussed above.
For the CSE runtime error you should perhaps contact the Mobius developers. But my guess is that you should download and install the whole distribution, not just replace a single file.
For anyone in the future who is dealing with this problem:
The problem is simply that in app.js there are four calls to the function (fopt.check). The calls in app.js pass five parameters, but the function itself takes only four arguments. For this reason, body_obj always ends up "undefined", so the code never reaches the "Container" or "ContentInstance" branch. KETI recently pushed a new commit to the Mobius GitHub page (https://github.com/IoTKETI/Mobius/commit/950182b725d5ffc0552119c5c705062958db285f) to fix this problem. It solves the problem as long as you are using use_secure == 'disable'. If you want to use use_secure == 'enable', you should add an if statement that checks use_secure and import the HTTPS module.
Also, how to define the "mid" attribute when creating the group resource is not very clear. For now, if you want to reach the (latest) contentInstance, you should add "/la" to every member of the group, as in the sketch below. This is what KETI recommends in issue 5 on the GitHub page:
(https://github.com/IoTKETI/Mobius/issues/5#issuecomment-625076540)
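For illustration, creating the group with "/la" appended to the member could look like the following minimal Python sketch; the header values, originator, and resource names are placeholders based on the resource tree above, and ty=9 is the oneM2M resource type for a group:
import requests

headers = {
    "X-M2M-Origin": "SOrigin",                    # placeholder originator
    "X-M2M-RI": "req-0002",                       # arbitrary request identifier
    "Content-Type": "application/json;ty=9",      # ty=9 creates a <Group> resource
    "Accept": "application/json",
}
body = {
    "m2m:grp": {
        "rn": "grp_test_100520_09",
        "mt": 3,                                  # memberType: <Container>
        "mid": ["Mobius/ae_test_02/thyme_01/la"], # structured member path with /la
        "mnm": 10,
    }
}
resp = requests.post("https://localhost:7579/Mobius", headers=headers,
                     json=body, verify=False)
print(resp.status_code, resp.json())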
And lastly, thank you Andreas Kraft; your help was very useful.
First of all, please share whether there is any official MS Graph SDK documentation anywhere that I can use for reference.
I have a scenario where I want to query all manager and member links from AAD without providing the user and group objectIDs respectively. This is currently supported in the DQ (delta query) channel, i.e. I can do something like this using the MS Graph SDK:
MsGraphClient.Users.Delta().Request().Select("manager")
OR
MsGraphClient.Groups.Delta().Request().Select("members")
I don't want to use DQ for the initial sync due to performance problems and other issues.
My fallback option is to query Graph directly, so I want to do something like the following, but this doesn't return any results:
MsGraphClient.Users.Request().Select("manager")
OR
MsGraphClient.Groups.Request().Select("members")
It looks like this isn't even supported currently at the lower (AAD Graph) layer. Please correct me if I am wrong, and provide a solution if there is one!
So my fallback approach is to pull all the user and group aadObjectIds, and explicitly query the manager and member links respectively.
In my case, there can potentially be 500K User-Objects in AAD, and I want to avoid making 500K separate GetManager calls to AAD. Instead, I want to batch the Graph requests as much as possible.
I wasn't able to find much help on the Internet about sending batch requests through the SDK.
Here's what I am doing:
I have this BatchRequestContent:
var batchRequestContent = new BatchRequestContent();
foreach (string aadObjectId in aadObjectIds)
{
batchRequestContent.AddBatchRequestStep(new BatchRequestStep(aadObjectId, Client.Users[aadObjectId].Manager.Request().GetHttpRequestMessage()));
}
and I am trying to send a batch request through the Graph SDK with this content to get a batch response. Is this currently supported in the SDK? If yes, what's the procedure? Is there any documentation or example? How do I read the batch response back? Finally, is there any limit on the # of requests in a batch?
Thanks,
Here is a related post: $expand=manager does not expand manager
$expand is currently not supported on the manager and directReports relationships in the v1.0 endpoint. It is supported in the beta endpoint, but the API returns far too much throwaway information: https://graph.microsoft.com/beta/users?$expand=manager
The client library partially supports batch at this time, although we have a couple of pull requests to provide better support with the next release (PR 1 and 2).
To use batch with the current library and your authenticated client, you'll do something like this:
var authProv = MsGraphClient.AuthenticationProvider;
var httpClient = GraphClientFactory.Create(authProv);
// Send batch request with BatchRequestContent.
HttpResponseMessage response = await httpClient.PostAsync("https://graph.microsoft.com/v1.0/$batch", batchRequestContent);
// Handle http responses using BatchResponseContent.
BatchResponseContent batchResponseContent = new BatchResponseContent(response);
I have created two users, and if I create a bucket for one user and an object inside that bucket, I can share it using the HTTP API at the moment; see here:
https://simperium.com/docs/reference/http/#objectshare
However, even when I send "write_access" = true and get a 200 as a result, it doesn't seem to let me write to it.
It's only if I enable sharing back the other way that it allows data to sync both ways. Am I doing something wrong?
Has collaboration got any further yet? I can see there is a long but no docs as yet - does anyone know?
After some more trial & error, I found the solution:
to edit the shared object, the target user (ie the user that the object was shared with) needs to use an objectId that is equal to: <original_user_simperiumId>/<original_objectId> to edit the object.
If you just use <original_objectId> it won't work.
So the full command for editing a shared object, using curl, is:
curl -H 'X-Simperium-Token: {auth_token_of_target_user}' https://api.simperium.com/1/{appID}/{entity}/i/{original_user_simperiumId}/{original_objectId} -d '{"data_key" : "new_data_value"}'
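The same edit in Python, as a minimal sketch; the app ID, entity/bucket name, object IDs, and token below are placeholders:
import requests

app_id = "your-app-id"                         # placeholder Simperium app ID
entity = "todo"                                # placeholder bucket/entity name
original_user_id = "original_user_simperium_id"
original_object_id = "original_object_id"

# The target user must address the object as <original_user_id>/<original_object_id>
url = (f"https://api.simperium.com/1/{app_id}/{entity}/i/"
       f"{original_user_id}/{original_object_id}")
headers = {"X-Simperium-Token": "auth_token_of_target_user"}

resp = requests.post(url, headers=headers, json={"data_key": "new_data_value"})
print(resp.status_code, resp.text)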
I want to retrieve all the messages that were sent in my team's Slack domain. Although I'd prefer the data to be in XML or JSON, I am able to handle it in just about any form.
How can I retrieve all these messages? Is it possible? If not, can I retrieve all the messages for a specific channel?
If you need to do this dynamically via API you can use the channels.list method to list all of the channels in your team and channels.history method to retrieve the history of each channel. Note that this will not include DMs or private groups.
If you need to do this as a one-time thing, go to https://my.slack.com/services/export to export your team's message archives as a series of JSON files.
This Python script exports everything to JSON with a simple run:
https://gist.github.com/Chandler/fb7a070f52883849de35
It creates the directories for you, and you have the option to exclude direct messages or channels.
All you need to install is the slacker module, which is simply pip install slacker. Then run it with --token='secret-token'. You need a legacy token, which is available here at the moment.
For anyone looking for Direct Message history downloads, this node based cli tool allows you to download messages from DMs and IMs in both JSON and CSV. I've used it, and it works very well.
With the new Conversations API this task is a bit easier now. Here is a full overview:
Fetching messages from a channel
The new API method conversations.history will allow you to download messages from every type of conversation/channel (public, private, DM, group DM) as long as your token has access to it.
This method also supports paging, allowing you to download large amounts of messages.
Resolving IDs to names
Note that this method will return messages in a raw JSON format with IDs only, so you will need to call additional API methods to resolve those IDs into plain text:
user ID: users.list
channel IDs: conversations.list
bot IDs: bots.info (there is no official bots.list method, but there is an unofficial one, which might help in some cases)
Fetching threads
In addition, use conversations.replies to download threads in a conversation. Threads function a bit like conversations within a conversation and need to be downloaded separately.
Check out this page of the official documentation for more details on threading.
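For illustration, a minimal Python sketch using the slack_sdk package that pages through conversations.history and resolves user IDs via users.list could look like this; the token and channel ID are placeholders:
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-token")   # placeholder token
channel_id = "C0123456789"                    # placeholder channel ID

# Build a user-ID -> name lookup so the raw messages can be resolved to names
# (for very large workspaces users.list is paginated as well)
users = {u["id"]: u.get("real_name") or u["name"]
         for u in client.users_list()["members"]}

# Page through the full channel history using cursor-based pagination
messages = []
cursor = None
while True:
    kwargs = {"channel": channel_id, "limit": 200}
    if cursor:
        kwargs["cursor"] = cursor
    result = client.conversations_history(**kwargs)
    messages += result["messages"]
    if not result["has_more"]:
        break
    cursor = result["response_metadata"]["next_cursor"]

for msg in messages:
    print(users.get(msg.get("user"), msg.get("user")), msg.get("text"))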
If anyone is still looking for a solution in 2021 and has no assistance from their workspace admins to export messages, they can do the following.
Step 1: Get the API token from your UI cookie
Clone SlackPirate, install its requirements, and run it
Open Slack in a browser and copy the value of the cookie named d
Run python3 SlackPirate.py --cookie '<value of d cookie>'
Step 2: Dump the channel messages
Install slackchannel2pdf (Requires python)
slackchannel2pdf --token 'xoxb-1466...' --write-raw-data T0EKHQHK2/G015H62SR3M
Step 3: Dump the direct messages
Install slack-history-export (Requires node)
slack-history-export -t 'xoxs-1466...' -u '<correct username>' -f 'my_colleagues_chats.json'
I know this might be late for the OP, but if anyone is looking for a tool capable of doing a full Slack workspace export, try Slackdump. It's free and open source (I'm the author, but anyone can contribute).
To do the workspace export, run it with the -export switch:
./slackdump -export my_export.zip
If you need to download attachments as well, use the -f switch (stands for "files"):
./slackdump -f -export my_export.zip
It will open the browser and ask you to log in. If you need to do it headless, grab a token and cookie, as described in the documentation.
It will generate an export file that is compatible with another nice tool, slack-export-viewer.
You can retrieve all the messages from a particular channel in Slack by using the conversations.history method of the slack_sdk library in Python. In the snippet below, client is assumed to be a WebClient instance and logger a standard logger defined elsewhere:
from slack_sdk.errors import SlackApiError

def get_conversation_history(self, channel_id, latest, oldest):
    """Fetch the conversation history of a particular channel."""
    try:
        # First page: up to 100 messages between `oldest` and `latest`
        result = client.conversations_history(
            channel=channel_id,
            inclusive=True,
            latest=latest,
            oldest=oldest,
            limit=100)
        all_messages = []
        all_messages += result["messages"]
        # Keep paging with the cursor from the previous response
        # while Slack reports that more messages are available
        while result['has_more']:
            result = client.conversations_history(
                channel=channel_id,
                inclusive=True,
                latest=latest,
                oldest=oldest,
                limit=100,
                cursor=result['response_metadata']['next_cursor'])
            all_messages += result["messages"]
        return all_messages
    except SlackApiError:
        logger.exception("Error while fetching the conversation history")
Here, I have provided the latest and oldest timestamps to cover the time range from which we need to collect messages from the conversation history.
The cursor argument is used to pass the next cursor value: this method can only collect 100 messages at a time, but it supports pagination, and we can take the next cursor value from result['response_metadata']['next_cursor'].
Hope this is helpful.
Here is another tool for exporting all messages from a channel.
The tool is called slackchannel2pdf and will export all messages from a public or private channel to a PDF document.
You only need a token with the required scopes and access.
When we create projects via the API, the newly created project is immediately returned both in the web app and in the API.
But a tag created using the API ("https://app.asana.com/api/1.0/tags") is often returned only after two or three GET requests. In the web app it also needs a refresh; the online application sync does not pick up new tags the way it does projects.
These late returns really affect the user interaction. I follow the same workflow that is used for creating and adding projects, but tags feel a bit laggy. Am I missing anything?
The answer is that tags which aren't associated with any tasks are, unfortunately, hidden in the app, and consequently also in the API. As you discovered, you can get the ID back from the POST that creates the tag and then associate it with a task from there. Since there's little purpose in creating a tag if you're not associating it with something, that shouldn't typically be a problem, but it is clunky. We are looking at changing our data model for tags to be a bit more intuitive in future, but that's still a ways off, so this is the reality for the foreseeable future.
The newly created tag is sometimes missing from the GET /tags API. However, the HTTP response returned after creating the new tag with POST /tags will contain the id, name, and other properties of the newly created tag, so we can add the new tag from this response.
curl -X POST https://app.asana.com/api/1.0/tags \
-d "name=fluffy" \
-d "workspace=14916"
# Response
HTTP/1.1 201
{
"data": {
"id": 1771,
"name": "fluffy",
...
}
}
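The same idea in Python, as a minimal sketch; the personal access token and workspace ID are placeholders. It creates the tag and reads its properties straight from the POST response instead of waiting for it to appear in GET /tags:
import requests

headers = {"Authorization": "Bearer <personal_access_token>"}  # placeholder token

# Create the tag; the response body already contains the new tag's id and name
resp = requests.post(
    "https://app.asana.com/api/1.0/tags",
    headers=headers,
    data={"name": "fluffy", "workspace": "14916"},
)
new_tag = resp.json()["data"]

# Use the tag immediately, e.g. attach it to a task, without calling GET /tags
print(new_tag["id"], new_tag["name"])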