Below are the steps I'm following to create a custom-audience-based Remote Config condition:
1. I created a user property called OEM.
2. I created a dynamic link with utm_source set to google-micromax:
https://d83j2.app.goo.gl/?link=http://myapp.in&apn=com.myapp.app&utm_source=google-micromax&utm_medium=micromax_device&utm_campaign=promo_google_micromax
3. I created an OEM-Micromax audience with the condition that the user property OEM contains google-micromax.
4. I created a Remote Config condition based on the OEM-Micromax audience.
5. I handle the dynamic link and set the user property to the value of the link's utm_source parameter:
AppInvite.AppInviteApi.getInvitation(mGoogleApiClient, this, autoLaunchDeepLink)
        .setResultCallback(
                new ResultCallback<AppInviteInvitationResult>() {
                    @Override
                    public void onResult(AppInviteInvitationResult result) {
                        if (result.getStatus().isSuccess()) {
                            // First-time user: read utm_source from the deep link
                            // and store it in the user property.
                            if (StorageHelper.getBooleanObject(StorageHelper.FIRST_TIME_USER, true)) {
                                Intent intent = result.getInvitationIntent();
                                String deepLink = AppInviteReferral.getDeepLink(intent);
                                Uri uri = Uri.parse(deepLink);
                                String utmSource = uri.getQueryParameter("utm_source");
                                FirebaseEvents.setUserProperty(utmSource);
                                StorageHelper.setBooleanObject(StorageHelper.FIRST_TIME_USER, false);
                            }
                            FirebaseEvents.logEventInvite(true);
                        }
                    }
                });
Now, when I fetch the oem_admob_banner_unit_id parameter from Remote Config, it still returns the default value instead of the value for the OEM-Micromax audience.
What am I doing wrong?
Not sure if this is related to your issue, but I also could not get an audience-driven Remote Config parameter to work. (Mine was an audience based on an app event/parameter, so it's a slightly different scenario, but maybe a similar problem.) It finally started working after I forced enough users into the audience by triggering my event repeatedly; I'm not sure how many it took, probably around 10.
After fetching, you should call activateFetched. From the FIRRemoteConfig reference:

- (BOOL)activateFetched

Applies the fetched config data to the active config, causing updates to the behavior and appearance of the app to take effect (depending on how the config data is used in the app). Returns true if there was a fetched config and it was activated; returns false if no fetched config was found, or the fetched config was already activated.
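The same fetch-then-activate contract applies across SDKs (on Android, the legacy API is fetch() followed by activateFetched()). As a minimal sketch of the flow, assuming the Firebase JS SDK and the parameter key from the question:

import { initializeApp } from "firebase/app";
import { getRemoteConfig, fetchAndActivate, getValue } from "firebase/remote-config";

// Placeholder config: fill in your project's values.
const app = initializeApp({ /* apiKey, projectId, appId, ... */ });
const remoteConfig = getRemoteConfig(app);

// Without the activate step, getValue() keeps returning the in-app default,
// which is exactly the symptom described in the question.
await fetchAndActivate(remoteConfig);
const bannerUnitId = getValue(remoteConfig, "oem_admob_banner_unit_id").asString();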
I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However, when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that, I get all of the information about my channel. (The same is true for calling users_info with the user_id - it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick, I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply that there is a workspace setting which, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, log into the Settings & Permissions page and change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.
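For reference, here is a minimal sketch of how the call and the (currently misleading) error response can be handled. This uses the Node Slack SDK rather than slack-bolt for Python; the token variable and the handling strategy are assumptions:

import { WebClient } from "@slack/web-api";

const client = new WebClient(process.env.SLACK_BOT_TOKEN);

async function kickUser(channelId: string, userId: string): Promise<void> {
  try {
    await client.conversations.kick({ channel: channelId, user: userId });
  } catch (err) {
    const apiError = (err as any).data?.error;
    // Until Slack fixes the bug, a kick blocked by the workspace setting can
    // surface as "channel_not_found" instead of "restricted_action".
    if (apiError === "channel_not_found" || apiError === "restricted_action") {
      console.warn("Check who may remove members from public channels in Settings & Permissions.");
    } else {
      throw err;
    }
  }
}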
I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debugging mode, I get an error in AuthController.cs that says:
Could not list models. See the console for more details
I didn't make any significant changes to the original code; I only changed the environment variables (client ID, secret, etc.).
The error comes from the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client ID and secret are probably correct, and I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?
I can only repeat what AlexAR said: the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs. For that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been taken by someone else already, e.g. "mybucket". I then get an error when trying to access the files in it, since it's not my bucket; bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply do not declare the APS_BUCKET environment variable, and the sample will generate a bucket name for you based on the client ID of your app.
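As a sketch of that fallback, deriving a bucket name from the client ID could look like the snippet below; the exact suffix the sample appends is an assumption on my part:

// Bucket keys must be globally unique; deriving one from the (already unique)
// client ID avoids collisions with buckets owned by other apps.
const clientId = process.env.APS_CLIENT_ID ?? "";
const bucket = process.env.APS_BUCKET ?? `${clientId.toLowerCase()}-basic-app`;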
I am making a worklist application using SAPUI5. The problem is that when I create an entry and then create another one right after that, I get the following error:
Default changeset implementation allows only one operation.
I checked the $batch header and I see that there is a MERGE and a POST, with the MERGE updating the previous entry for some reason. Can anyone shed some light? Could it be a backend error and not a UI5 error?
Creating the new entry:
_onMetadataLoaded: function() {
    var oModel = this.getView().getModel();
    var that = this;
    // ...
    oModel.read("/USERS_SET", {
        success: function(oData) {
            var oProperties = {
                Qmnum: "0",
                Otherstuff: "cool"
            };
            that._oContext = that._oView.getModel().createEntry("/ENTITYSET", {
                properties: oProperties
            });
            that.getView().setBindingContext(that._oContext);
            // ...
        }
    });
},

handleSavePress: function(oEvent) {
    // ...
    this.getView().getModel().submitChanges({
        success: function(oData) {
            // ...
        },
        error: function(oError) {
            // ...
        }
    });
},
tl;dr: Apparently you must be using SAP Gateway. If you do not need to process those requests in one transaction, send them in different changesets. If you do not need batch calls at all, consider turning batching off by supplying your model with "useBatch": false upon instantiation. However, if you need to process the requests together in one transaction, you have to read the details below.
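A minimal sketch of the useBatch option, assuming the v2 ODataModel and a placeholder service URL (UI5 also supports switching at runtime via setUseBatch):

import ODataModel from "sap/ui/model/odata/v2/ODataModel";

// With batching disabled, every read/change goes out as a single HTTP
// request, so the one-operation-per-changeset limit never applies.
const oModel = new ODataModel("/sap/opu/odata/sap/ZMY_SRV/", {
  useBatch: false
});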
To understand the problem, you have to understand how the gateway and batch/changeset requests work.
Batch requests consist of multiple requests bundled together. The purpose is to open only one connection and group related requests, so that the overhead is minimized. Changesets form smaller blocks inside batch requests, where modification requests can be bundled and processed together to ensure an all-or-nothing characteristic.
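To make that structure concrete, here is an illustration of the wire format of a $batch body whose single changeset bundles two creates; the boundary names and payloads are made up:

// Illustration only: if either POST fails, the gateway rolls back the whole
// changeset (all or nothing).
const batchBody = `--batch_1
Content-Type: multipart/mixed; boundary=changeset_1

--changeset_1
Content-Type: application/http
Content-Transfer-Encoding: binary

POST ENTITYSET HTTP/1.1
Content-Type: application/json

{"Qmnum":"0"}
--changeset_1
Content-Type: application/http
Content-Transfer-Encoding: binary

POST ENTITYSET HTTP/1.1
Content-Type: application/json

{"Qmnum":"1"}
--changeset_1--
--batch_1--`;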
So, on the gateway side: there are two relevant classes for your OData service, assuming that you have used the SAP Gateway Service Builder (SEGW transaction). There is one class ending in ...DPC and one ending in ...DPC_EXT. Don't touch the former; it will always be regenerated when you update your service in the Service Builder. The latter is the one we need in this example. You will have to redefine at least two methods:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS
By default, the changeset_begin method only allows changeset processing for changesets where the number of requests equals one. Such changesets can be processed automatically; that is why the limitation exists. If there were more requests, their processing could not be ensured automatically, as they could have business dependencies on each other.
So make sure to allow bundled (deferred-mode) processing of changesets under the desired conditions:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN: first call the super->/iwbep/if_mgw_appl_srv_runtime~changeset_begin method in a TRY-CATCH block, then loop at it_operation_info to decide and narrow processing down to the selected cases, and allow cv_defer_mode only for those cases; otherwise raise a /iwbep/cx_mgw_tech_exception=>changeset_not_supported exception.
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS: all requests will be available in it_changeset_request; make sure to fill the ct_changeset_response table with the responses.
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_process.
  DATA:
    lv_operation_counter  TYPE i VALUE 0,
    lr_context            TYPE REF TO /iwbep/cl_mgw_request,
    lr_entry_provider     TYPE REF TO /iwbep/if_mgw_entry_provider,
    lr_message_container  TYPE REF TO /iwbep/if_message_container,
    lr_entity_data        TYPE REF TO data,
    ls_context_details    TYPE /iwbep/if_mgw_core_srv_runtime=>ty_s_mgw_request_context,
    ls_changeset_response LIKE LINE OF ct_changeset_response.

  FIELD-SYMBOLS:
    <fs_ls_changeset_request> LIKE LINE OF it_changeset_request.

  LOOP AT it_changeset_request ASSIGNING <fs_ls_changeset_request>.
    lr_context           ?= <fs_ls_changeset_request>-request_context.
    lr_entry_provider     = <fs_ls_changeset_request>-entry_provider.
    lr_message_container  = <fs_ls_changeset_request>-msg_container.
    ls_context_details    = lr_context->get_request_details( ).

    CASE ls_context_details-target_entity.
      WHEN 'SomeEntity'.
        " Do the processing here
      WHEN OTHERS.
    ENDCASE.
  ENDLOOP.
ENDMETHOD.
From the error I can tell you must be using SAP GW :-) This happens only for batch requests containing more than one create/update/delete call, and it is related to transaction security ("all or nothing"). What you have to do is redefine the corresponding GW method; I think it was CHANGESET_BEGIN. See https://archive.sap.com/discussions/thread/3562720 for some details (can't offer more for now...).
I am trying to update Jira issue fields through the REST API. I am able to update the summary, description, priority, and reporter fields, but not the status.
Here is the code I am trying to run:
string jsonContent = (@"
{
    ""fields"": {
        ""summary"": ""data"",
        ""description"": ""modified."",
        ""priority"": { ""name"": ""val"" },
        ""reporter"": { ""name"": ""abcdef@gmail.com"" },
        ""status"": { ""name"": ""WORK IN PROGRESS"" }
    }
}").Replace("data", summ).Replace("modified.", desc).Replace("val", pri);

request.AddParameter("application/json", jsonContent, ParameterType.RequestBody);
var response = Execute(request);
You cannot change the status of an issue like that.
To determine which fields can be changed with a simple PUT request, do a GET for the edit metadata:
https://{your-jira-url}/rest/api/2/issue/{issueIdOrKey}/editmeta
This query will return all the fields that you can modify. You won't find the status field in the returned JSON object.
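A quick way to verify that, with placeholder URL and credentials:

const issueUrl = "https://your-jira-url/rest/api/2/issue/ISSUE-1";
const headers = {
  Authorization: "Basic " + Buffer.from("user:api-token").toString("base64"),
};

// The editmeta response lists every field a plain PUT may change.
const resp = await fetch(`${issueUrl}/editmeta`, { headers });
const meta = await resp.json();
console.log(Object.keys(meta.fields)); // "status" is not among these keys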
Back to your problem: how can the status of an issue be changed? In Jira, a workflow defines the possible transitions between statuses. To change the status, you need to perform a transition (exactly the same way you would in the UI).
So first do a GET request like this:
https://{your-jira-url}/rest/api/2/issue/{issueIdOrKey}/transitions?expand=transitions.fields
This request will return all possible transitions from your issue's current status. Check which transition you want to perform and note its ID (in my case the desired ID is 11). With this transition ID you can do a POST request with the following JSON payload:
https://{your-jira-url}/rest/api/2/issue/{issueIdOrKey}/transitions
{
    "transition": {
        "id": "11"
    }
}
One additional thing to note: if your transition isn't a simple one, you have to provide more data. By a simple transition I mean one where you would just click a button in the UI without getting an extra screen for the transition. (E.g. you can set up a transition so that an issue can only be resolved if a comment is added to it.) Fortunately, the previously returned transition list contains all the fields that can or must be provided together with the transition ID.
You can find more information in the official Jira documentation.
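Putting the two calls together, a minimal sketch; the base URL, the credentials, and the transition name are placeholders, and the built-in fetch of Node 18+ is assumed:

const base = "https://your-jira-url/rest/api/2/issue/ISSUE-1";
const headers = {
  Authorization: "Basic " + Buffer.from("user:api-token").toString("base64"),
  "Content-Type": "application/json",
};

// 1. List the transitions available from the issue's current status.
const listResp = await fetch(`${base}/transitions?expand=transitions.fields`, { headers });
const { transitions } = await listResp.json();

// 2. Pick the desired transition by name and POST its id.
const target = transitions.find((t: { name: string }) => t.name === "Work in progress");
await fetch(`${base}/transitions`, {
  method: "POST",
  headers,
  body: JSON.stringify({ transition: { id: target.id } }),
});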
I see that the workflow is to start the authorizer, giving it a file loader. So we have a sequence of callbacks: onAuthorized => start loading file => doc.getModel() on file load. Here they say how you get the model. But I also see that gapi.drive.realtime.load(fileId, onFileLoaded, initializeModel, handleErrors) can also end up with TOKEN_REFRESH_REQUIRED, and it seems that TOKEN_REFRESH_REQUIRED can fire after the document is loaded, after some time of user inactivity, which seems to be related to token expiration. How should re-authorization go? Should I tell the client that the current model it is connected to is invalid? Please note that my app starts on file load. So if I go through the whole re-authorization stack, which triggers another file load, which triggers another document-loaded callback, it will restart my application. Is that the intended way to go? To put it in other words, is there a way to refresh the token without losing the existing connection?
Where is the token actually stored? I do not see that I receive it in onAuthorized, and it is not passed to realtime.load. How does realtime.load know about the token? How can I speed up token expiration for debugging?
I am still not sure that this is the right answer, but this is what I got from looking at the code here, which suggests that we should provide an empty callback to re-authorize:
/**
 * Reauthorize the client; with no callback this is used for
 * authorization failure.
 * @param onAuthComplete {Function} to call once authorization has completed.
 */
rtclient.Authorizer.prototype.authorize = function(onAuthComplete) {
  function authorize() {
    gapi.auth.authorize(
        {client_id: rtclient.id, scope: ['install', 'file']},
        handleAuthResult);
  }
  function handleAuthResult(authResult) {
    if (authResult && !authResult.error) {
      hideAuthorizationButton();
      if (onAuthComplete) {
        onAuthComplete();
      }
    } else {
      // Auth failed: show the authorization button and retry on click.
      authorizationButton.style.display = 'block';
      authorizationButton.onclick = authorize;
    }
  }
  authorize();
};
You first use it in the function that loads your document, passing a callback:

// First use: authorize, then proceed to loading the file;
// if already authorized, skip straight to the callback.
rtclient.authorizer.authorize(proceedToLoadingTheFile);
But later, on timeout, we have this code:

function handleErrors(e) {
  switch (e.type) {
    case gapi.drive.realtime.ErrorType.TOKEN_REFRESH_REQUIRED:
      rtclient.authorizer.authorize(); // note: no callback argument
      break;
    case gapi.drive.realtime.ErrorType.CLIENT_ERROR:
      // ...
  }
}
Note there are no arguments in the latter call, so the authorizer won't reload the document. I think this explains the logic in question. However, it does not answer the question about the internals: how is it possible that the loader picks up the existing authorizer or switches to a new one?