Is it possible to get a list of all streams using the Spring Cloud Data Flow Java REST client? - spring-cloud-dataflow

I'm using the Spring Cloud Data Flow Java REST client (https://docs.spring.io/spring-cloud-dataflow/docs/current/api/) and want to use it to retrieve all currently deployed streams.
It's easy enough to get a StreamOperations object and get a list of streams from it:
val template = DataFlowTemplate(<someUri>)
val streamOperations = template.streamOperations()
val streamDefinitionResources = streamOperations.list()
The streamDefinitionResources above is actually a PagedModel<StreamDefinitionResource> that holds the first page of results, using a page size of 2000.
I don't, however, see any way to iterate through all the pages to get all the streams using the Java REST client (i.e. there's no paging support exposed via the StreamOperations or StreamDefinitionResource classes).
Is it possible to get all the streams using only the Java REST client? Am I missing something?

Summary
The PagedModel<StreamDefinitionResource> has a getNextLink() method that you can use to manually traverse the "next" page of results.
Details
The underlying Data Flow REST API supports paging via the page and size request parameters and returns HAL responses that include _links to the next and previous pages.
For example, given 10 stream definitions, this HTTP request:
GET localhost:9393/streams/definitions?page=0&size=2
returns the following response:
{
  "_embedded": {
    "streamDefinitionResourceList": [
      {
        "name": "ticktock1",
        "dslText": "time | log",
        "originalDslText": "time | log",
        "status": "undeployed",
        "description": "",
        "statusDescription": "The app or group is known to the system, but is not currently deployed",
        "_links": {
          "self": {
            "href": "http://localhost:9393/streams/definitions/ticktock1"
          }
        }
      },
      {
        "name": "ticktock2",
        "dslText": "time | log",
        "originalDslText": "time | log",
        "status": "undeployed",
        "description": "",
        "statusDescription": "The app or group is known to the system, but is not currently deployed",
        "_links": {
          "self": {
            "href": "http://localhost:9393/streams/definitions/ticktock2"
          }
        }
      }
    ]
  },
  "_links": {
    "first": {
      "href": "http://localhost:9393/streams/definitions?page=0&size=2"
    },
    "self": {
      "href": "http://localhost:9393/streams/definitions?page=0&size=2"
    },
    "next": {
      "href": "http://localhost:9393/streams/definitions?page=1&size=2"
    },
    "last": {
      "href": "http://localhost:9393/streams/definitions?page=4&size=2"
    }
  },
  "page": {
    "size": 2,
    "totalElements": 10,
    "totalPages": 5,
    "number": 0
  }
}
The Data Flow Java REST client exposes this HAL response as a PagedModel<StreamDefinitionResource>, which provides a getNextLink() method.
Caveat 1) The current implementation is, as you pointed out, hardcoded to a page size of 2000. This means you would not see a "next" link until you had more than 2000 stream definitions.
Caveat 2) Traversal of the link to the "next" page is not handled automatically; you need to invoke the link's URL yourself to retrieve the next page.
If StreamOperations.list accepted a page size parameter, the code could look something like this:
int pageSize = 2;
PagedModel<StreamDefinitionResource> pageOfStreamDefs = streamOperations().list(pageSize);
pageOfStreamDefs.getNextLink()
.ifPresent((link) -> someFunctionToInvokeAndProcessNextPage(link.getHref()));
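In the meantime, if you need a different page size you can drop down to the REST API through the template's underlying RestTemplate and deserialize into StreamDefinitionResource.Page, the paged type the client itself uses. A rough, untested sketch (it assumes a DataFlowTemplate named template, as in your snippet, and a dataFlowUri placeholder pointing at the server, e.g. http://localhost:9393):
// Fetch a single page of stream definitions with an explicit page number and size.
// dataFlowUri is a placeholder for your Data Flow server's base URI.
StreamDefinitionResource.Page firstPage = template.getRestTemplate().getForObject(
        dataFlowUri + "/streams/definitions?page={page}&size={size}",
        StreamDefinitionResource.Page.class, 0, pageSize);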
More details on the REST API parameters can be found here.

I guess I'm a bit late, but I had the same issue and found a workaround. As onobc said, the PagedModel of any resource has a getNextLink() method that returns a Link with the next page address.
You can use the same RestTemplate from the DataFlowTemplate to handle these follow-up requests:
PagedModel<StreamDefinitionResource> streamDefPage = dataflowTemplate.streamOperations().list();
// Process page here
if (streamDefPage.getNextLink().isPresent()) {
    Link link = streamDefPage.getNextLink().get();
    PagedModel<StreamDefinitionResource> streamDefNextPage = dataflowTemplate.getRestTemplate()
            .getForObject(link.getHref(), StreamDefinitionResource.Page.class);
    // Process next page here
}
And so on.
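If you need every stream definition, a loop that keeps following the next link until none is left does the trick. A rough, untested sketch along the same lines (it reuses getRestTemplate() and StreamDefinitionResource.Page from above and simply accumulates the content of each page):
List<StreamDefinitionResource> allStreamDefs = new ArrayList<>();
PagedModel<StreamDefinitionResource> page = dataflowTemplate.streamOperations().list();
while (page != null) {
    // Collect the definitions from the current page.
    allStreamDefs.addAll(page.getContent());
    // Follow the "next" link if there is one, otherwise stop.
    Optional<Link> next = page.getNextLink();
    page = next.isPresent()
            ? dataflowTemplate.getRestTemplate().getForObject(next.get().getHref(), StreamDefinitionResource.Page.class)
            : null;
}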
Hope this helps!

Related

Custom activity feed notifications sent from a daemon (Node.js app): not getting the data assigned to subEntityId in the Teams mobile client

I'm using the activityFeedNotification Graph API to send push notifications to the users of our Teams tab app from the backend using Node.js. The notification is sent successfully in both the Teams desktop and mobile clients, but we're not getting the data assigned to subEntityId in the mobile client (in the desktop client and browser we are getting it).
We encode the data (an object) and assign it to subEntityId in the Teams context in our Node.js application. Then, in the Teams client, we read that data from the Teams context using the Microsoft Teams SDK and redirect the user to the respective page in our application based on whatever data we get in subEntityId.
On desktop, deep linking works perfectly, but in the Android client we're not getting any data in subEntityId. It just opens the homepage of our tab app, whereas I need to redirect the user to a specific page based on whatever data is assigned to subEntityId.
Below is how we're encoding the data and assigning it to subEntityId.
Server Side:
const context = encodeURIComponent(
  JSON.stringify({
    "subEntityId": {
      "type": "PROGRAM_PROFILE",
      "program_id": "12345",
      uid: uuidv4(),
    }
  })
);
const body = {
  topic: {
    source: 'text',
    value: notificationTopic,
    webUrl: `https://teams.microsoft.com/l/entity/${TEAMS_APP_ID}/index?context=${context}`,
  },
  activityType: 'commonNotification',
  previewText: {
    content: notificationSubtitle,
  },
  templateParameters: [
    {
      name: 'title',
      value: notificationTitle,
    },
  ],
};
const url = `https://graph.microsoft.com/v1.0/users/${userId}/teamwork/sendActivityNotification`;
await axios.post(url, body);
Client Side:
const context = await app.getContext();
console.log(context?.page?.subPageId); // getting undefined
Any kind of help is appreciated!
From the documentation:
{page.subPageId}: The developer-defined unique ID for the subpage this content points to, defined when generating a deep link for a specific item within the page. (Known as {subEntityId} prior to TeamsJS v.2.0.0).
You are using subEntityId on the backend but accessing subPageId on the client side.

How to retrieve Slack messages via API identified by permalink?

I'm trying to retrieve a list of Slack reminders, which works fine using the Slack API's reminders.list method. However, reminders that are set using Slackbot (i.e. by asking Slackbot to remind me of a message) return the respective permalink of that message as text:
{
  "ok": true,
  "reminders": [
    {
      "id": "Rm012C299C1E",
      "creator": "UV09YANLX",
      "text": "https:\/\/team.slack.com\/archives\/DUNB811AM\/p1583441290000300",
      "user": "UV09YANLX",
      "recurring": false,
      "time": 1586789303,
      "complete_ts": 0
    },
Instead of showing the permalink, I'd naturally like to show the message I wanted to be reminded of. However, I couldn't find any hints in the Slack API docs on how to retrieve a message identified by a permalink. The link is presumably generated by chat.getPermalink, but there seems to be no obvious chat.getMessageByPermalink or similar.
I tried to interpret the path elements as channel and timestamp, but the timestamp (transformed from the example above: 1583441290.000300) doesn't seem to really match. At least I don't end up with the message I expected to retrieve when passing this as latest to conversations.history and limiting the result to 1.
After fiddling a while longer, here's how I finally managed it in JS:
async function downloadSlackMsgByPermalink(permalink) {
  const pathElements = permalink.substring(8).split('/');
  const channel = pathElements[2];
  var url;
  if (permalink.includes('thread_ts')) {
    // Threaded message, use conversations.replies endpoint
    // Strip the leading 'p' and the query string, then re-insert the dot before the last six digits
    var ts = pathElements[3].substring(1, pathElements[3].indexOf('?'));
    ts = ts.substring(0, ts.length - 6) + '.' + ts.substring(ts.length - 6);
    var latest = pathElements[3].substring(pathElements[3].indexOf('thread_ts=') + 10);
    if (latest.indexOf('&') != -1) latest = latest.substring(0, latest.indexOf('&'));
    url = `https://slack.com/api/conversations.replies?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&ts=${ts}&latest=${latest}&inclusive=true&limit=1`;
  } else {
    // Non-threaded message, use conversations.history endpoint
    // Strip the leading 'p' and any query string, then re-insert the dot before the last six digits
    var latest = pathElements[3].substring(1);
    if (latest.indexOf('?') != -1) latest = latest.substring(0, latest.indexOf('?'));
    latest = latest.substring(0, latest.length - 6) + '.' + latest.substring(latest.length - 6);
    url = `https://slack.com/api/conversations.history?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&latest=${latest}&inclusive=true&limit=1`;
  }
  const response = await fetch(url);
  const result = await response.json();
  if (result.ok === true) {
    return result.messages[0];
  }
}
It's not been tested to the fullest extent, but the first results look alright:
The trick with the conversations.history endpoint was to include the inclusive=true parameter
Messages might be threaded - the separate endpoint conversations.replies is required to fetch those
As the Slack API docs state: ts and thread_ts look like timestamps, but they aren't. Happily, treating them a bit like timestamps anyway (i.e. splitting off the last six digits and re-joining them with a dot) seems to work.
Naturally, the slackAccessToken variable needs to be set beforehand
I'm aware the way to extract & transform the URL components in the code above might not be the most elegant solution, but it proves the concept :-)

Passing report level filter to power bi from asp.net mvc application

I have embedded a Power BI report in my ASP.NET MVC application. It is working fine and showing data. I have one issue: in my application every client has a few branches (like cities). Every user for that client may have access to all or some branches, and based on that access we show data when that user is logged into the system.
I want to achieve the same in Power BI: when a user is logged into the system and views reports, they should only see data for the branches they have access to. I found one solution
(Can I pass a dynamic query parameter to an embedded Power BI report in ASP.Net MVC?) where we can set filter values from the program side. I applied this in my view but I am still getting all the data.
const branchFilter = {
  $schema: "http://powerbi.com/product/schema#basic",
  target: {
    table: "tblWorkhistory",
    column: "BranchID"
  },
  operator: "In",
  values: [1],
  filterType: models.FilterType.BasicFilter
}
var config = {
  type: 'report',
  tokenType: models.TokenType.Embed,
  accessToken: accessToken,
  embedUrl: embedUrl,
  id: embedReportId,
  permissions: models.Permissions.All,
  viewMode: models.ViewMode.View,
  filters: [branchFilter],
  settings: {
    filterPaneEnabled: true,
    navContentPaneEnabled: true
  }
};
Here I want to apply a filter so that it shows data only for the branch whose ID is 1, but it is showing all data. Am I missing anything here? Do I need to do anything on the report side as well after this code?
Regards,
Savan
Applying a filter using the embed config is supported, and the code sample you've shared seems valid.
If report.setFilters fails as well, it seems like something about the filter is wrong.
Try verifying the data type of the filter value: should it be 1 or "1"?
If this doesn't work, is there any error message returned from the .setFilters request?
const filter = { ... };
report.setFilters([filter])
  .catch(errors => {
    // Handle error
  });
Learn more here about setting filters through the JS SDK.
To show data based on the user logged into your application, you first have to apply Row-Level Security (RLS) in your Power BI report and then follow the steps from the following article: https://learn.microsoft.com/en-us/power-bi/developer/embedded-row-level-security

Search with Relay doesn't include new results due to local cache

I've implemented a search-as-you-type component in React and Relay. It's roughly the same setup as in "search functionality using relay".
It works as intended, with one exception: new results from the server never appear when I retype a search I've already performed on the client. It looks like Relay always goes to the local cache in this case.
So, for example, say I've searched for 'foo' and didn't find any results. Now, seconds later, another user on the website creates this 'foo', but Relay will never query the server since the cached response to the 'foo' search was an empty result.
Is there a pattern or best practice for this scenario?
The query is as follows. I call this.props.relay.setVariables to perform the search:
initialVariables: {
  search: '',
  hasSearch: false
},
fragments: {
  me: () => Relay.QL`
    fragment on Viewer {
      relationSearch(search: $search) @include(if: $hasSearch) {
        ... on User {
          username
        }
      }
    }
  `
}
The answer seems to be to use this.props.relay.forceFetch with the search variables instead.
See https://facebook.github.io/relay/docs/api-reference-relay-container.html#forcefetch
Someone correct me if this isn't best practice.

Mutation not fetching data specified by the fat query in conjunction with RANGE_ADD?

I'm trying to understand RANGE_ADD. I've provided the mutation config for RANGE_ADD with all the required information. I use the viewer naming convention, with my connections nested within viewer. Below is what my complete Relay mutation looks like ...
import Relay from "react-relay";
export default class CreateTeamMutation extends Relay.Mutation {
static fragments = {
viewer: () => Relay.QL`
fragment on Viewer {
id
}
`
}
getMutation() {
return Relay.QL`
mutation { createTeam }
`;
}
getVariables() {
return {
name: this.props.name
};
}
getFatQuery() {
return Relay.QL`
fragment on CreateTeamPayload {
edge,
viewer {
teams
}
}
`;
}
getConfigs() {
console.log(this.props.viewer);
return [{
type: "RANGE_ADD",
edgeName: "edge",
parentID: this.props.viewer.id,
parentName: "viewer",
connectionName: "teams",
rangeBehaviors: {
"": "append"
}
}];
}
};
I provide a fragment for the viewer id, which at run-time, within getConfigs, I can see is present.
In the GraphQL Mutation response payload, CreateTeamPayload, the viewer field is provided so that Relay can make use of the connection within the mutation config for RANGE_ADD. Also, within CreateTeamPayload, the new edge is provided as edge.
These three bits of info (the viewer id for the parent, the connection information, and the edge) seem to be all that RANGE_ADD demands. I also make sure to request this data from the server via the fat query, so that Relay has access to it for the mutation config.
Relay does not seem to include what I've specified in the fat query (and what is required for the mutation config) in what it dispatches to the server, though. All that Relay is requesting is the clientMutationId. Here is the request made by Relay ...
{
  "query": "mutation CreateTeamMutation($input_0:CreateTeamInput!){createTeam(input:$input_0){clientMutationId}}",
  "variables": {
    "input_0": {
      "name": "foo bar",
      "clientMutationId": "0"
    }
  }
}
And, in chain-reaction fashion, this causes Relay, which is expecting the viewer and edge for the mutation config, to throw an error ...
Warning: writeRelayUpdatePayload(): Expected response payload to include the newly created edge `edge` and its `node` field. Did you forget to update the `RANGE_ADD` mutation config?
Those required fields could totally be there if Relay had included them. Does RANGE_ADD have to be accompanied by REQUIRED_CHILDREN for this to work? The mutation goes through to the server, and the record is created on the server; it's just that the client-side mutation config fails to incorporate the changed data into the store.
This is probably a case of an ambiguous warning: https://github.com/facebook/relay/issues/542
Relay intersects the range behaviours with the previously fetched connections.
Your range behaviors here only define what to do when the teams connection is not under the influence of any call. How are you calling your teams connection in the app?
For example, if your app fetches teams(order_by: 'recent'), you should define a range behavior like this one: 'order_by(recent)': 'append'
