OpenLDAP JNDI extended operations

I have OpenLDAP with a password policy (ppolicy) configured with attributes like:
pwdAttribute: userPassword
pwdMaxAge: 7776002
pwdExpireWarning: 432000
pwdInHistory: 3
pwdCheckQuality: 1
pwdMinLength: 6
pwdMaxFailure: 3
pwdLockout: TRUE
pwdLockoutDuration: 900
pwdGraceAuthNLimit: 0
pwdFailureCountInterval: 0
pwdMustChange: TRUE
pwdAllowUserChange: TRUE
pwdSafeModify: FALSE
I would like to know how to obtain the expire warning, etc., via JNDI (LdapContext).

You need to add some LDAP request controls. See the PasswordPolicyRequest/ResponseControl code and factory I posted in the Oracle forums.
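For what it's worth, the expire warning arrives in the password-policy response control defined by draft-behera-ldap-password-policy (control OID 1.3.6.1.4.1.42.2.27.8.5.1); in JNDI you attach the request control with LdapContext.setRequestControls and read the response control back from getResponseControls, then decode its BER value. Here is a sketch of that decoding, written in Python for readability rather than Java; the short definite-length assumption (values under 128 bytes) is mine:

```python
def decode_ppolicy_response(value: bytes) -> dict:
    """Decode the draft-behera PasswordPolicyResponseValue:
    SEQUENCE { warning [0] CHOICE { timeBeforeExpiration [0] INTEGER,
    graceAuthNsRemaining [1] INTEGER } OPTIONAL,
    error [1] ENUMERATED OPTIONAL }.
    Assumes short definite-length BER (all lengths < 128 bytes)."""
    result = {"time_before_expiration": None,
              "grace_authns_remaining": None,
              "error": None}
    assert value[0] == 0x30  # outer SEQUENCE
    i, end = 2, 2 + value[1]
    while i < end:
        tag, length = value[i], value[i + 1]
        body = value[i + 2 : i + 2 + length]
        if tag == 0xA0:  # warning CHOICE (context [0], constructed)
            ctag, clen = body[0], body[1]
            n = int.from_bytes(body[2 : 2 + clen], "big")
            if ctag == 0x80:    # timeBeforeExpiration, in seconds
                result["time_before_expiration"] = n
            elif ctag == 0x81:  # graceAuthNsRemaining
                result["grace_authns_remaining"] = n
        elif tag == 0x81:  # error ENUMERATED (context [1], primitive)
            result["error"] = int.from_bytes(body, "big")
        i += 2 + length
    return result
```

So a server warning of "password expires in 60 seconds" comes across as `30 05 A0 03 80 01 3C`; the same walk in Java over the byte[] from Control.getEncodedValue() gives you the pwdExpireWarning countdown.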


Apache superset caching customization

I'm trying to integrate Apache Superset into my multi-tenant application and I'm having the issue below regarding caching.
Superset provides almost everything we need, easily configured in superset_config.py. In my case I'm trying to make the caching key include the tenant ID so I can have data segregation between the tenants; what I should do is get the tenant ID from the session and append it to the key.
Note that I'm running superset:2.0.0
Below are my caching configurations in superset_config.py:
FILTER_STATE_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 86400,
    "CACHE_KEY_PREFIX": "FILTER_STATE_CACHE_CONFIG",
    "CACHE_REDIS_URL": "redis://xxx.xxx.xxx.xxx:6379/0",
}
DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_KEY_PREFIX": "DATA_CACHE_CONFIG_",  # make sure this string is unique to avoid collisions
    "CACHE_DEFAULT_TIMEOUT": 86400,  # 60 seconds * 60 minutes * 24 hours
    "CACHE_REDIS_URL": "redis://xxx.xxx.xxx.xxx:6379/0",
}
EXPLORE_FORM_DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_KEY_PREFIX": "EXPLORE_FORM_DATA_CACHE_CONFIG_",  # make sure this string is unique to avoid collisions
    "CACHE_DEFAULT_TIMEOUT": 86400,  # 60 seconds * 60 minutes * 24 hours
    "CACHE_REDIS_URL": "redis://xxx.xxx.xxx.xxx:6379/0",
}
What I did is update the cache.py file in superset/utils/
and amend the set_and_log_cache function to read the tenant ID from the session:
cache_key = cache_key + "Tenant:" + str(session["Tenant_Id"])
I can see the keys have their corresponding tenant IDs in the redis CLI, but the caching is not working in Superset!
Is there any sort of configuration I should add, or anything that I'm missing?
I would really be thankful for any kind of help.
Note that I used the docker exec command to enter the container, made my code changes and committed them to a new image, then used the new image in docker-compose-non-dev.yml.
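One thing worth checking, as an assumption on my part since set_and_log_cache only covers the write path: if the tenant suffix is appended when storing but the corresponding reads still compute the original key, every lookup misses even though the suffixed keys show up in redis-cli. A minimal sketch of deriving the key in one shared helper (the key shape and Tenant_Id session field come from the question above; the helper name is hypothetical):

```python
def tenant_scoped_key(cache_key: str, tenant_id) -> str:
    """Derive the tenant-scoped key in exactly one place, so that the
    write path (set_and_log_cache) and every read path agree on the
    same key. If only cache.set() uses the suffixed key, cache.get()
    will look up the unsuffixed key and always miss."""
    return f"{cache_key}Tenant:{tenant_id}"
```

Whatever code path later does the lookup would need to go through the same helper, e.g. `cache.get(tenant_scoped_key(key, session["Tenant_Id"]))`.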

Time on page calculated only for specific segment in Adobe Analytics

Goal
I would like to see the time on page for a user who is logged in, and eliminate from reports the time while the user was not logged in.
To have the ability to distinguish between time on page while the user is not logged in and time on page while they are logged in.
Setup
Let's say we have:
Traffic variable "User logged in" as prop1, with value true or false.
Traffic variable "Time from previous event" as prop2, in seconds.
eVar1 duplicating prop1 | expire after event5
eVar2 duplicating prop2 | expire after event5
event4 - User logged in
event5 - User logged out
Time between events
From an article about measuring time between events (https://experienceleaguecommunities.adobe.com/t5/adobe-analytics-questions/calculate-time-between-success-events/qaq-p/302787)
if (s.events && (s.events + ",").indexOf("event4,") > -1) {
    s.prop2 = "start"
}
if (s.events && (s.events + ",").indexOf("event5,") > -1) {
    s.prop2 = "stop"
}
s.prop2 = s.getTimeToComplete(s.prop2, "TTC", 0);
s.getTimeToComplete = new Function("v", "cn", "e", "var s=this,d=new Date,x=d,k;if(!s.ttcr){e=e?e:0;if(v=='start'||v=='stop')s.ttcr=1;x.setTime(x.getTime()+e* 86400000);if(v=='start'){s.c_w(cn,d.getTime(),e?x:0);return '';}if(v=='stop'){k=s.c_r(cn);if(!s.c_w(cn,'',d)||!k)return '';v=(d.getTime()-k)/1000;var td=86400,th=3600,tm=60,r=5,u,un;if(v>td){u=td;un='days';}else if(v>th){u=th;un='hours';}else if(v>tm){r=2;u=tm;un='minutes';}else{r=.2;u=1;un='seconds';}v=v*r/u;return (Math.round(v)/r)+' '+un;}}return '';");
Time spent overview
From adobe docs (https://docs.adobe.com/content/help/en/analytics/components/metrics/time-spent.html)
A “sequence” is a consecutive set of hits where a given variable
contains the same value (whether by being set, spread forward, or
persisted). For example, prop1 “A” has two sequences: hits 1 & 2 and
hit 6. Values on the last hit of the visit do not start a new sequence
because the last hit has no time spent. Average time spent on site
uses sequences in the denominator.
So I guess I will use prop1 as the denominator for the logged-in user state to count time between events in prop2 properly.
Problem
I am not quite sure if this approach is enough to correctly measure time spent only while the user is logged in. I would appreciate some hints on how to set up the eVars correctly, or whether I understand the sequence denominator correctly.
I also set up the eVars with terminating event5, but I am not sure if this leads to the desired behavior.
If you have also solved this problem before, can you please tell me how you define your segment or condition in reports?
The getTimeBetweenEvents plugin should do the job. However, it seems like it was rewritten; I have found example calls in the documentation, including via the Launch plugins extension:
https://docs.adobe.com/content/help/en/analytics/implementation/vars/plugins/gettimebetweenevents.html
From Adobe documentation
Install the plug-in using AppMeasurement
Copy and paste the following code anywhere in the AppMeasurement file after the Analytics tracking object is instantiated (using s_gi). Preserving comments and version numbers of the code in your implementation helps Adobe with troubleshooting any potential issues.
/******************************************* BEGIN CODE TO DEPLOY *******************************************/
/* Adobe Consulting Plugin: getTimeBetweenEvents v2.1 (Requires formatTime and inList plug-ins) */
s.getTimeBetweenEvents=function(ste,rt,stp,res,cn,etd,fmt,bml,rte){var s=this;if("string"===typeof ste&&"undefined"!==typeof rt&&"string"===typeof stp&&"undefined"!==typeof res){cn=cn?cn:"s_tbe";etd=isNaN(etd)?1:Number(etd);var f=!1,g=!1,n=!1, p=ste.split(","),q=stp.split(",");rte=rte?rte.split(","):[];for(var h=s.c_r(cn),k,v=new Date,r=v.getTime(),c=new Date,a=0; a<rte.length;++a)s.inList(s.events,rte[a])&&(n=!0);c.setTime(c.getTime()+864E5*etd);for(a=0;a<p.length&&!f&&(f=s.inList(s.events,p[a]),!0!==f);++a);for(a=0;a<q.length&&!g&&(g=s.inList(s.events,q[a]),!0!==g);++a);1===p.length&&1===q.length&&ste===stp&&f&&g?(h&&(k=(r-h)/1E3),s.c_w(cn,r,etd?c:0)):(!f||1!=rt&&h||s.c_w(cn,r,etd?c:0),g&&h&&(k=(v.getTime()-h)/1E3,!0===res&&(n=!0)));!0===n&&(c.setDate( c.getDate()-1),s.c_w(cn,"",c));return k?s.formatTime(k,fmt,bml):""}};
/* Adobe Consulting Plugin: formatTime v1.1 (Requires inList plug-in) */
s.formatTime=function(ns,tf,bml){var s=this;if(!("undefined"===typeof ns||isNaN(ns)||0>Number(ns))){if("string"===typeof tf&&"d"===tf||("string"!==typeof tf||!s.inList("h,m,s",tf))&&86400<=ns){tf=86400;var d="days";bml=isNaN(bml)?1:tf/(bml*tf)} else"string"===typeof tf&&"h"===tf||("string"!==typeof tf||!s.inList("m,s",tf))&&3600<=ns?(tf=3600,d="hours", bml=isNaN(bml)?4: tf/(bml*tf)):"string"===typeof tf&&"m"===tf||("string"!==typeof tf||!s.inList("s",tf))&&60<=ns?(tf=60,d="minutes",bml=isNaN(bml)?2: tf/(bml*tf)):(tf=1,d="seconds",bml=isNaN(bml)?.2:tf/bml);ns=Math.round(ns*bml/tf)/bml+" "+d;0===ns.indexOf("1 ")&&(ns=ns.substring(0,ns.length-1));return ns}};
/* Adobe Consulting Plugin: inList v2.1 */
s.inList=function(lv,vtc,d,cc){if("string"!==typeof vtc)return!1;if("string"===typeof lv)lv=lv.split(d||",");else if("object"!== typeof lv)return!1;d=0;for(var e=lv.length;d<e;d++)if(1==cc&&vtc===lv[d]||vtc.toLowerCase()===lv[d].toLowerCase())return!0;return!1};
/******************************************** END CODE TO DEPLOY ********************************************/
Then your eVar may look like:
s.eVar1 = s.getTimeBetweenEvents("event1", true, "event2", true, "", 0, "s", 2, "event3");
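The minified plugin above is hard to read, so here is the core bookkeeping it performs, sketched in Python (the real plugin persists the start timestamp in a cookie, s_tbe by default; the class attribute here stands in for that, and event4/event5 follow the setup in the question — the class itself is my sketch, not the plugin):

```python
import time

class TimeBetweenEvents:
    """Sketch of getTimeBetweenEvents bookkeeping: remember when the
    start event fired and report elapsed seconds on the stop event."""

    def __init__(self):
        self.start_ts = None  # the plugin keeps this in a cookie

    def hit(self, events, now=None):
        """events: list of event names fired on this tracking call.
        Returns elapsed seconds on a stop event, else None."""
        now = time.time() if now is None else now
        if "event4" in events:  # start event: user logged in
            self.start_ts = now
        if "event5" in events and self.start_ts is not None:  # stop event
            elapsed = now - self.start_ts
            self.start_ts = None  # reset, like the plugin clearing its cookie
            return elapsed
        return None
```

A stop event with no recorded start returns nothing, which matches the plugin returning an empty string in that case.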

DSE Graph Loader configuration

It seems like messing with the configuration, e.g. trying to tune the number of reader threads for vertices & edges, causes a lot of unexplained exceptions; there is also an issue with trying to set the batch size.
It seems to work only with the default settings produced by the executable.
I got a lot of exceptions while trying "to play" with those values;
some of them are:
[1] a CAS exception from Cassandra, something about the inability to create more partition keys.
[2] Cassandra timeout during write query at consistency ONE
and more.
As there is no reference in the documentation about how to solve these issues, I don't know how to continue. It seems like everything there is very delicate and shaky, so any little change causes tons of exceptions.
This is with graph-loader-6.0.1 and DSE 6.0.0 or DSE 6.0.1.
For example:
com.datastax.dsegraphloader.exception.LoadingException: com.datastax.driver.core.exceptions.InvalidQueryException: Resource limited exceeded on added vertices, properties and edges. Maximum of 100000 allowed. Please split transaction into multiple smaller ones and retry.
This is what I get when I try to use some config.
This is the Groovy file for the configuration:
// config
config preparation: false
config create_schema: false
config load_new: true
config load_edge_threads: 5
config load_vertex_threads: 5
config batch_size: 5000
// orders
inputfiledir = '/home/dseuser/'
profileInput = File.text(inputfiledir + "soc-pokec-profiles.txt").
delimiter("\t").header('user_id','public','completion_percentage','gender','region','last_login','registration','age',
'body','I_am_working_in_field','spoken_languages','hobbies','I_most_enjoy_good_food','pets','body_type',
'my_eyesight','eye_color','hair_color','hair_type','completed_level_of_education','favourite_color',
'relation_to_smoking','relation_to_alcohol','sign_in_zodiac','on_pokec_i_am_looking_for','love_is_for_me',
'relation_to_casual_sex','my_partner_should_be','marital_status','children','relation_to_children','I_like_movies',
'I_like_watching_movie','I_like_music','I_mostly_like_listening_to_music','the_idea_of_good_evening',
'I_like_specialties_from_kitchen','fun','I_am_going_to_concerts','my_active_sports','my_passive_sports','profession',
'I_like_books','life_style','music','cars','politics','relationships','art_culture','hobbies_interests',
'science_technologies','computers_internet','education','sport','movies','travelling','health','companies_brands',
'holder1','holder2')
relationInput = File.text(inputfiledir + "soc-pokec-relationships.txt").
delimiter("\t").header('auser','buser')
profileInput = profileInput.transform {
    if (it['completion_percentage'] == 'null') { it.remove('completion_percentage') }
    if (it['gender'] == 'null') { it.remove('gender') }
    if (it['last_login'] == 'null') { it.remove('last_login') }
    if (it['registration'] == 'null') { it.remove('registration') }
    if (it['age'] == 'null') { it.remove('age') }
    it
}
load(profileInput).asVertices {
    label "user"
    key "user_id"
}
load(relationInput).asEdges {
    label "relation"
    outV "auser", {
        label "user"
        key "user_id"
    }
    inV "buser", {
        label "user"
        key "user_id"
    }
}
I tried to use the soc-pokec (social network) dataset from Stanford (available on the web).
I had to drop most of the config to solve the issue.
Note that there is no correlation at all between the numbers in the exception and the settings I made in the config.
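There may actually be a correlation hiding in plain sight, under my own assumption that the loader counts the vertex plus each of its properties as separate elements of one transaction. The profile file declares 60 header columns, so a batch of 5000 rows would blow past the 100000-element limit even though 5000 itself looks unrelated to the number in the exception. Back-of-envelope:

```python
# Hypothetical reading of "Maximum of 100000 allowed": each loaded row
# contributes the vertex itself plus one element per property column.
batch_size = 5000        # config batch_size from the groovy file
columns = 60             # header fields declared for profileInput
elements_per_row = 1 + columns  # vertex + properties (assumption)

total = batch_size * elements_per_row
print(total)  # 305000, well over the 100000 limit in the exception
```

If that reading is right, shrinking batch_size to roughly 100000 // 61 ≈ 1600 rows (or fewer) would keep each transaction under the limit, which would also explain why the default settings work.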

How to set up service method caching in Grails

My application has a couple of services that make external calls via httpClient (GET and POST) that are unlikely to change in months, but they are slow, making my application even slower.
Clarification: this is NOT about caching GORM/hibernate/queries to my db.
How can I cache these methods (persistence on disk gets bonus points...) in grails 2.1.0?
I have installed the grails-cache-plugin but it doesn't seem to be working, or I configured it wrong (very hard to do since there are only 2-5 lines to add, but I've managed to do that in the past).
I also tried setting up an nginx proxy cache in front of my app, but when I submit one of my forms with slight changes, I get the first submission as the result.
Any suggestions/ideas will be greatly appreciated.
EDIT: Current solution (based on Marcin's answer)
My Config.groovy (the caching part only):
//caching
grails.cache.enabled = true
grails.cache.clearAtStartup = false
grails.cache.config = {
    defaults {
        timeToIdleSeconds 3600
        timeToLiveSeconds 2629740
        maxElementsInMemory 1
        eternal false
        overflowToDisk true
        memoryStoreEvictionPolicy 'LRU'
    }
    diskStore {
        path 'cache'
    }
    cache {
        name 'scoring'
    }
    cache {
        name 'query'
    }
}
The important parts are:
do not clear at startup (grails.cache.clearAtStartup = false)
overflowToDisk=true persists all results over maxElementsInMemory
maxElementsInMemory=1 reduced number of elements in memory
'diskStore' should be writable by the user running the app.
Grails Cache Plugin works quite well for me under Grails 2.3.11. Documentation is pretty neat, but just to show you a draft...
I use the following settings in Config.groovy:
grails.cache.enabled = true
grails.cache.clearAtStartup = true
grails.cache.config = {
    defaults {
        maxElementsInMemory 10000
        overflowToDisk false
        maxElementsOnDisk 0
        eternal true
        timeToLiveSeconds 0
    }
    cache {
        name 'somecache'
    }
}
Then, in the service I use something like:
@Cacheable(value = 'somecache', key = '#p0.id.toString().concat(#p1)')
def serviceMethod(Domain d, String s) {
    // ...
}
Notice the somecache part is reused. Also, it was important to use String as key in my case. That's why I used toString() on id.
The plugin can be also set up to use disk storage, but I don't use it.
If it doesn't help, please provide more details on your issue.
This may not help, but if you upgrade the application to Grails 2.4.x you can use the @Memoize annotation. This will automagically cache the results of each method call based upon the arguments passed into it.
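To illustrate what argument-keyed memoization buys you here (a generic sketch in Python, not the Grails annotation itself): each result is cached under the argument tuple, so a repeated call with the same arguments skips the slow body entirely:

```python
import functools

def memoize(fn):
    """Sketch of argument-keyed memoization: cache results keyed by
    the positional-argument tuple, like @Memoize does per method call."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # slow call happens only once per key
        return cache[args]
    return wrapper

calls = []  # records how many times the slow body actually runs

@memoize
def slow_lookup(x):
    """Stand-in for a slow external GET/POST."""
    calls.append(x)
    return x * 2
```

Note this trades staleness for speed with no TTL at all, which is exactly why the asker's eternal/timeToLiveSeconds knobs in the cache config matter for HTTP-backed data.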
In order to store this "almost static" information you could use Memcached or Redis as a cache system. (There are many others.)
These two cache systems allow you to store key-value data (in your case something like "key_GET": JSON, XML, MAP, String).
Here is a related post: Memcached vs. Redis?
Regards.

AD ASP.NET Changing Passwords

We're using ASP.NET MVC and the AD membership provider for login, and for various reasons had to implement our own "change password on next login" functionality.
We also have a NIST requirement of not allowing more than one change per 24 hours, so it's set up that way in AD.
What we need is to ignore that one requirement when resetting a password to the default: we want the student to be forced to change the password at the next logon, even if it's before 24 hours have passed.
Here is my stab at it. Basically I want to change the pwdLastSet property to a value more than 24 hours old after we reset the password.
if (bSetToDefault)
{
    var adDate = userToActOn.ADEntry.Properties["PwdLastSet"][0];
    DateTime passwordLastSet = DateTime.FromFileTime((Int64)adDate);
    passwordLastSet = System.DateTime.Now.AddHours(-25);
    long filetime = passwordLastSet.ToFileTimeUtc();
    userToActOn.ADEntry.Properties["PwdLastSet"][0] = filetime;
}
But I keep getting null back even when I know the user's password has been changed.
Anyone got any hints or suggestions? Am I looking in the wrong property?
Hmm, this attribute is replicated, so it should always be available.
Try the command-line script to see if it shows up:
http://www.rlmueller.net/PwdLastChanged.htm
Is it possible it's because it's a 64-bit date and you're not doing a conversion? Try the script though and see if it works. If it does, then look at the Integer8Date procedure in it for your date conversion.
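For reference, the Integer8/FILETIME format that script (and DateTime.FromFileTime) deals with is 100-nanosecond ticks since 1601-01-01 UTC. A small Python sketch of the conversion both ways, kept in integer arithmetic:

```python
from datetime import datetime, timedelta, timezone

# AD's pwdLastSet is an Integer8: 100-ns ticks since the FILETIME epoch.
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ft: int) -> datetime:
    """Equivalent of DateTime.FromFileTime: ticks -> wall-clock time."""
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

def datetime_to_filetime(dt: datetime) -> int:
    """Inverse conversion; integer division avoids float precision loss."""
    return ((dt - EPOCH_1601) // timedelta(microseconds=1)) * 10
```

The Unix epoch lands at tick 116444736000000000, which is a handy sanity check when you suspect a missing or wrong conversion.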
If you use System.DirectoryServices.AccountManagement then there is an exposed method on the UserPrincipal to expire the password immediately, so it will be as easy as calling oUserPrincipal.ExpirePasswordNow(); for more info about using it, please see this article.
