I am using Parse, and I have around 500 users. I recently ran into a new issue where only some users have their ACLs set, and those users are not found when querying. Basically, I cannot access any users who have an ACL set. Any ideas? Thanks!
Try creating a custom function or background job in the Cloud Code (JavaScript) of your Parse application and doing something like this:
// bypass ACL restrictions
Parse.Cloud.useMasterKey();
// take a user for testing and fetch it
var someFetchedUser = ......;
// create a new ACL object
var newACL = new Parse.ACL();
// grant public read access on this ACL object
// (all users can read the row that carries this ACL, i.e. someFetchedUser)
newACL.setPublicReadAccess(true);
// grant write/delete access only to this user (the owner himself)
newACL.setWriteAccess(someFetchedUser.id, true);
// set the new ACL on the fetched user (overwrites the previous one)
someFetchedUser.setACL(newACL);
// save the change (should work, since you're working with the master key)
someFetchedUser.save().then(function(savedUser){
  // Do your stuff..
}, function(err){
  // something went wrong
});
This changes the ACL of a single user; adapt it as needed to apply it to all users, for example with a background job like the one sketched below.
Hope it helps!
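For reference, here is a minimal sketch of such a background job that re-applies the ACL to every user. The job name fixUserACLs is a placeholder; this is only an illustration of the approach, not code from the original answer:
Parse.Cloud.job("fixUserACLs", function(request, status) {
  // the master key is required to read and update users regardless of their ACLs
  Parse.Cloud.useMasterKey();
  var query = new Parse.Query(Parse.User);
  query.each(function(user) {
    var acl = new Parse.ACL();
    acl.setPublicReadAccess(true);     // everyone can read the user row
    acl.setWriteAccess(user.id, true); // only the owner can write it
    user.setACL(acl);
    return user.save();
  }).then(function() {
    status.success("ACLs updated");
  }, function(err) {
    status.error("ACL update failed: " + err.message);
  });
});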
I have an OData service that provides the data, and I am able to display this data in a table. We are going to deploy this application to the Launchpad. Now we have a requirement that the logged-in user must get data according to his/her login ID: if my user ID is XXXXX, I should get only the records for XXXXX. I am unable to understand the process flow. Should we implement the logic in the OData service itself, or should I fetch all the data and filter the model in the UI before displaying it?
Regards,
MS
In the OData service itself you can access the logged-in user via SY-UNAME; using that user you can filter your data.
OR
In the front end you can get the logged-in user with the code below:
var vUrl = "proxy/sap/bc/ui2/start_up";
var vUser;
var oxmlHttp = new XMLHttpRequest();
oxmlHttp.onreadystatechange = function() {
  if (oxmlHttp.readyState == 4 && oxmlHttp.status == 200) {
    var oUserData = JSON.parse(oxmlHttp.responseText);
    vUser = oUserData.id;
  }
};
oxmlHttp.open("GET", vUrl, false); // synchronous request
oxmlHttp.send(null);
You have to handle this in the OData service. Get the user ID in the UI using
var storename = sap.ushell.Container.getService("UserInfo").getId();
and pass it to the OData request as a filter so the service sends back only that user's results.
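As a rough front-end sketch of that idea (the entity set "/ZUserDataSet" and the property "Userid" below are placeholders for whatever your service exposes):
var sUserId = sap.ushell.Container.getService("UserInfo").getId();
var oFilter = new sap.ui.model.Filter("Userid", sap.ui.model.FilterOperator.EQ, sUserId);

// read only this user's records from the OData model bound to the view
this.getView().getModel().read("/ZUserDataSet", {
  filters: [oFilter],
  success: function(oData) {
    // oData.results now contains only records for the logged-in user
  },
  error: function(oError) {
    // handle the error
  }
});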
You should handle such requirements at the service level (OData level), not in the UI.
In the DPC_EXT class, the system variable SY-UNAME gives you the logged-in user, so you should filter your records by deriving more information from that.
Below are the steps I'm following to create a custom audience-based Remote Config condition:
First I created a user property called OEM
I created a dynamic link with utm_source as google-micromax
https://d83j2.app.goo.gl/?link=http://myapp.in&apn=com.myapp.app&utm_source=google-micromax&utm_medium=micromax_device&utm_campaign=promo_google_micromax
I created an OEM-Micromax audience with the condition that the user property OEM contains google-micromax
I then created a remote config condition based on the Micromax audience
I then handle the dynamic link and set the user property to the value returned from the link's utm_source
AppInvite.AppInviteApi.getInvitation(mGoogleApiClient, this, autoLaunchDeepLink)
    .setResultCallback(
        new ResultCallback<AppInviteInvitationResult>() {
            @Override
            public void onResult(AppInviteInvitationResult result) {
                if (result.getStatus().isSuccess()) {
                    // First-time user
                    if (StorageHelper.getBooleanObject(StorageHelper.FIRST_TIME_USER, true)) {
                        Intent intent = result.getInvitationIntent();
                        String deepLink = AppInviteReferral.getDeepLink(intent);
                        Uri uri = Uri.parse(deepLink);
                        String utm_source = uri.getQueryParameter("utm_source");
                        FirebaseEvents.setUserProperty(utm_source);
                        StorageHelper.setBooleanObject(StorageHelper.FIRST_TIME_USER, false);
                    }
                    FirebaseEvents.logEventInvite(true);
                }
            }
        });
Now, when I fetch the oem_admob_banner_unit_id parameter from Remote Config, it still returns the default value instead of the value for the Micromax audience.
What am I doing wrong?
Not sure if this is related to your issue, but I also could not get an audience-driven Remote Config value to work. (Mine happened to be an audience based on an app event/parameter, so it's a slightly different scenario, but maybe a similar problem.) It finally started working after I forced enough users into the audience by triggering my event repeatedly. I'm not sure how many it took, probably around 10.
After fetching, you should call activateFetched (FIRRemoteConfig - (BOOL)activateFetched). From the documentation:
Applies Fetched Config data to the Active Config, causing updates to the behavior and appearance
of the app to take effect (depending on how config data is used in the app).
Returns true if there was a Fetched Config, and it was activated.
Returns false if no Fetched Config was found, or the Fetched Config was already activated.
How can I create a system user in Sling?
I tried searching, but everything I find is related to AEM, which I don't use. Is it possible to create the user using the Jackrabbit API or Sling Initial Content (descriptor files)?
I tried to execute the following:
curl -u admin:admin -F:name=myuser -Fpwd=mypwd -FpwdConfirm=mypwd -Frep:principalName=myuser -Fjcr:primaryType=rep:SystemUser http://localhost:8080/home/users/system/*
But there is an error:
*ERROR* [127.0.0.1 [1465215465364] POST /home/users/system/* HTTP/1.1] org.apache.sling.servlets.post.impl.operations.ModifyOperation Exception during response processing.
javax.jcr.nodetype.ConstraintViolationException: Property is protected: rep:principalName = myuser
at org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.setProperty(NodeDelegate.java:525)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$35.perform(NodeImpl.java:1358)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$35.perform(NodeImpl.java:1346)
at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:209)
at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.internalSetProperty(NodeImpl.java:1346)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.setProperty(NodeImpl.java:432)
at org.apache.sling.servlets.post.impl.helper.SlingPropertyValueHandler.store(SlingPropertyValueHandler.java:592)
There is an out-of-the-box solution based on Sling and Jackrabbit Oak. It features a text-based DSL for setting up users and ACLs, for instance:
create service user bob,alice
set ACL on /libs,/apps
remove * for alice
allow jcr:read for bob
end
It is also possible to embed these instructions in the provisioning model used to build a Sling launchpad, assuming you're using the slingstart-maven-plugin.
The complete documentation can be found under Repository Initializers and the Repository Initialization Language.
I'm not sure this is possible through a POST request, per: https://mail-archives.apache.org/mod_mbox/sling-users/201512.mbox/%3CCAFMYLMb9Wiy+DYmacc5oT7YRWT1hth8j1XAAo_sKT8uq9HoFNw#mail.gmail.com%3E
The suggested solution there is to use the Jackrabbit API. That would look something like this:
// get a user manager (assuming your JCR session is a JackrabbitSession)
UserManager userManager = ((JackrabbitSession) session).getUserManager();
try {
    User systemUser = userManager.createSystemUser("myuser", "/home/users/system");
} catch (Exception e) {
    log.error("Error adding user", e);
    throw e;
}
// commit the changes
session.save();
It's very important to note that this doesn't allow you to set a password for this user, nor can one be set with user.changePassword(); when I try that I get an error:
javax.jcr.UnsupportedRepositoryOperationException: system user
From the Javadoc:
Create a new system user for the specified userID. The new authorizable is required to have the following characteristics:
User.isSystemUser() returns true.
The system user doesn't have a password set and doesn't allow change the password.
http://jackrabbit.apache.org/api/2.10/org/apache/jackrabbit/core/security/user/UserManagerImpl.html
Here's my whole activator class: https://gist.github.com/scrupulo/61b574c9aa1838da37d456012af5dd50
The Gmail API fails for one domain when retrieving messages, with this error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 OK
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Delegation denied for <user email>",
"reason" : "forbidden"
} ],
"message" : "Delegation denied for <user email>"
}
I am using OAuth 2.0 and Google Apps Domain-Wide delegation of authority to access the user data. The domain has granted data access rights to the application.
It seems like the best thing to do is to always use userId="me" in your requests. That tells the API to use the authenticated user's mailbox, so there is no need to rely on email addresses.
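To illustrate the idea in Node.js with the googleapis client (a rough sketch, not from either answer): a service-account JWT impersonates the target user, and the actual Gmail call then uses userId: 'me'. The key file path, scope, and impersonated address below are placeholders.
var google = require('googleapis');
var key = require('./service-account-key.json'); // placeholder path

// impersonate the user whose mailbox you want to read
var jwt = new google.auth.JWT(
  key.client_email,
  null,
  key.private_key,
  ['https://www.googleapis.com/auth/gmail.readonly'],
  'someuser@yourdomain.com' // placeholder: the user to impersonate
);

jwt.authorize(function(err) {
  if (err) { return console.error(err); }
  var gmail = google.gmail('v1');
  // userId: 'me' refers to the impersonated user's own mailbox
  gmail.users.messages.list({ auth: jwt, userId: 'me' }, function(err, res) {
    if (err) { return console.error(err); }
    console.log(res);
  });
});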
I had the same issue before. The solution is a bit tricky: you need to impersonate the person whose Gmail content you want to access first, then use userId='me' to run the query. It works for me.
Here is some sample code:
from google.oauth2 import service_account
from googleapiclient.discovery import build

users = # coming from directory service
for user in users:
    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES)
    #### IMPORTANT ####
    credentials_delegated = credentials.with_subject(user['primaryEmail'])
    gmail_service = build('gmail', 'v1', credentials=credentials_delegated)
    results = gmail_service.users().labels().list(userId='me').execute()
    labels = results.get('labels', [])
    for label in labels:
        print(label['name'])
Our users had migrated into a domain and their accounts had aliases attached to them. We needed to default the SendAs address to one of the imported aliases and wanted a way to automate it. The Gmail API looked like the solution, but our privileged user with roles to make changes to the accounts was not working; we kept seeing the "Delegation denied for" 403 error.
Here is a PHP example of how we were able to list their SendAs settings.
<?PHP
//
// Description:
// List the user's SendAs addresses.
//
// Documentation:
// https://developers.google.com/gmail/api/v1/reference/users/settings/sendAs
// https://developers.google.com/gmail/api/v1/reference/users/settings/sendAs/list
//
// Local Path:
// /path/to/api/vendor/google/apiclient-services/src/Google/Service/Gmail.php
// /path/to/api/vendor/google/apiclient-services/src/Google/Service/Gmail/Resource/UsersSettingsSendAs.php
//
// Version:
// Google_Client::LIBVER == 2.1.1
//
require_once $API_PATH . '/path/to/google-api-php-client/vendor/autoload.php';
date_default_timezone_set('America/Los_Angeles');
// this is the service account json file used to make api calls within our domain
$serviceAccount = '/path/to/service-account-with-domain-wide-delagation.json';
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $serviceAccount );
$userKey = 'someuser@my.domain';
// In the Admin Directory API, we may do things like create accounts with
// an account having roles to make changes. With the Gmail API, we cannot
// use those accounts to make changes. Instead, we impersonate
// the user to manage their account.
$impersonateUser = $userKey;
// these are the scope(s) used.
define('SCOPES', implode(' ', array( Google_Service_Gmail::GMAIL_SETTINGS_BASIC ) ) );
$client = new Google_Client();
$client->useApplicationDefaultCredentials(); // loads what's in that JSON service account file.
$client->setScopes(SCOPES); // adds the scopes
$client->setSubject($impersonateUser); // account authorized to perform operation
$gmailObj = new Google_Service_Gmail($client);
$res = $gmailObj->users_settings_sendAs->listUsersSettingsSendAs($userKey);
print_r($res);
?>
I wanted to access the emails of a fresh email ID/account, but the recently created '.credentials' folder containing a JSON file was associated with the previous email ID/account I had tried earlier. The access token and other parameters present in that JSON are not associated with the new email ID/account. So, to make it run, you just have to delete the '.credentials' folder and run the program again. The program then opens the browser and asks you to grant permissions.
To delete the folder and its contents in Python:
import shutil
shutil.rmtree("path of the folder to be deleted")
You may add this at the end of the program.
Recently I started exploring the Gmail API and I am following the same approach Guo mentioned. However, it takes a lot of time and too many calls when the number of users is large. After domain-wide delegation, my expectation was that the admin ID would be able to access the delegated inboxes, but it seems we need to create a service for each user.
I have a Rails 3.2 application that uses Redis as its session store. Now I'm about to write part of the new functionality in Node.js, and I want to be able to share session information between the two apps.
What I can do manually is read the _session_id cookie and then read from a Redis key named rack:session:session_id, but this feels like a hackish solution.
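For context, the manual approach described above looks roughly like this in Node (a sketch only; the naive cookie parsing and the node_redis usage are my own illustration, with the key prefix matching the rack:session: convention mentioned above):
var redis = require('redis');
var client = redis.createClient();

function loadRailsSession(req, callback) {
  // naive cookie parsing, for illustration only
  var match = /_session_id=([^;]+)/.exec(req.headers.cookie || '');
  if (!match) { return callback(null, null); }
  client.get('rack:session:' + match[1], function(err, data) {
    if (err) { return callback(err); }
    // raw session payload; the format depends on how Rails serialized it (Marshal or JSON)
    callback(null, data);
  });
}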
Is there a better way to share sessions between Node.js and Rails?
I have done this, but it does require making your own forks of things.
Firstly, you need to make the session key the same name on both sides. That's the easiest job.
Next, I created a fork of the redis-store gem and modified the file where the marshalling happens. I needed to talk JSON on both sides, because finding a Ruby-style Marshal module for JavaScript is not easy.
I also needed to replace the session middleware portion of Connect. The hash that is created is very specific and doesn't match the one Rails creates. I'll leave this part to you to work out, because there might be a nicer way; I could have forked Connect, but instead I extracted a copy of connect > middleware > session and required my own version.
You'll notice how the original adds in a base variable that isn't present in the Rails version. You also need to handle the case where Rails has created the session instead of Node; that is what the generateCookie function does.
/***** ORIGINAL *****/

// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha256', secret)
    .update(base + fingerprint(req))
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
  var base = utils.uid(24);
  var sessionID = base + '.' + store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

/***** MODIFIED *****/

// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha1', secret)
    .update(base)
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
  var base = utils.uid(24);
  var sessionID = store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

// generate a new cookie for a pre-existing session created by Rails without session.cookie
// it must not be a Cookie object (that breaks the merging of cookies)
store.generateCookie = function(sess){
  var newBlankCookie = new Cookie(cookie);
  sess.cookie = newBlankCookie.toJSON();
};

//... at the end of the session.js file
// populate req.session
} else {
  if ('undefined' == typeof sess.cookie) store.generateCookie(sess);
  store.createSession(req, sess);
  next();
}
I hope this works for you. It took me quite a bit of digging around to make the two sides talk the same way.
I also found an issue with flash messages being stored in JSON; hopefully you don't hit that one. Flash messages have a special object structure that JSON serialization blows away, so when a flash message is restored from the session you might not get a proper flash object back. I needed to patch for this too.
This may be completely unhelpful if you're not planning on using it, but all of my session experience with Node is through Connect. You could use the Connect session middleware and change the cookie key:
http://www.senchalabs.org/connect/session.html#session
and use this module to use redis as your session store:
https://github.com/visionmedia/connect-redis
I've never set up something like what you're describing, though, so there may be some hacking necessary. A rough sketch of the general idea follows.
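As a sketch only (assuming an Express 3 / Connect-era app with the connect-redis store; the shared secret and Redis settings are placeholders, and the cookie name and key prefix are chosen to match the Rails defaults mentioned in the question):
var express = require('express');
var RedisStore = require('connect-redis')(express);

var app = express();
app.use(express.cookieParser());
app.use(express.session({
  key: '_session_id',            // same cookie name Rails uses
  secret: 'shared-secret',       // placeholder
  store: new RedisStore({
    host: 'localhost',
    port: 6379,
    prefix: 'rack:session:'      // match the Rails redis-store key prefix
  })
}));
Note that this only lines up the cookie name and the Redis keys; the session payload format (Ruby Marshal vs. JSON) still has to be reconciled, as the other answer explains.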