Web server memory leak issue

I'm creating a dating website using symfony 1.4 (it's my first project using symfony). The problem is that the server freezes when there are only 10 or fewer users online. I tried optimizing my JS, CSS and sprites using YSlow, and I got grade A, but the problem is still there. That's why I think the way I built the application might be wrong. Here is the website: naijaconnexion.com. I'm asking you for advice and things to do so I can overcome this problem.
If I wasn't clear enough, just ask; if you want cPanel admin access, I'll post it.
I really, really need your help.
For instance, I have this code in my home page action. Does it seem OK, or does it need to be optimized, and how?
$this->me = $this->getUser()->getGuardUser()->getPerson();
$this->cities = Doctrine_Core::getTable('City')->findByDql("zipcode=''");
$this->countries = Doctrine_Core::getTable('City')->findByDql("zipcode='10'");
$this->contacts = $this->me->getContacts();
$this->favorites = $this->me->getFavorites();
$this->matches = $this->me->getMatches();
$this->pager = new sfDoctrinePager('Conversation', sfConfig::get('app_home_conversations_per_page'));
$this->pager->setQuery($this->me->getConversationsQuery());
$this->pager->setPage($request->getParameter('page', 1));
$this->pager->init();
Without HYDRATE_ARRAY I can do this. If I use HYDRATE_ARRAY, will I still be able to do stuff like $this->contacts[0]['username'];?
Help please.

The problem is Doctrine...
Try to fetch the needed values as arrays and set a limit!
How many objects are returned by the following queries?
$this->cities = Doctrine_Core::getTable('City')->findByDql("zipcode=''");
$this->countries = Doctrine_Core::getTable('City')->findByDql("zipcode='10'");
$this->contacts = $this->me->getContacts();
$this->favorites = $this->me->getFavorites();
$this->matches = $this->me->getMatches();
Do not forget that they are objects with references to other objects!
And yes, $this->contacts[0]['username']; will work.
If you join the tables in the Doctrine query, you can access related entities too, without executing additional queries:
$this->contacts = Doctrine_Core::getTable('Contact')
    ->createQuery('c')
    ->leftJoin('c.Users u')
    ->addWhere('...')
    ->execute(array(), Doctrine_Core::HYDRATE_ARRAY);
$this->contacts[0]['Users'][0]['username']
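To act on the "set a limit" advice for the city/country lookups, a bounded, array-hydrated query might look like this (a sketch against the Doctrine 1 query API; the column list and the limit of 20 are illustrative assumptions, not values from the original post):
$this->cities = Doctrine_Query::create()
    ->select('c.id, c.name')  // fetch only the columns the page renders (assumed columns)
    ->from('City c')
    ->where("c.zipcode = ''")
    ->limit(20)               // hard cap instead of hydrating every matching row as an object
    ->execute(array(), Doctrine_Core::HYDRATE_ARRAY);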

Link encryption?

I have been stuck on a problem for a few hours. Nothing online has helped and I'm losing the will to live right now.
The site loads a question with no hints and asks you to find a secret code.
Here's the brief explanation of it:
'Well done on making it to the secret bonus challenge! Our agents have been struggling to deal with a hacker obsessed with clocks and timing. He set up an elaborate collection of pages with content that changes based on a timer. We've replicated it below, can you figure out how to get the secret code?'
There are several links inside this challenge; each one opens a new page containing what look like pseudo-random strings, and I don't see much of a pattern. Links below:
https://assess.joincyberdiscovery.com/challenge-files/clock-pt1?verify=BY%2F8lhw%2BtbBgvOMDiHeB5A%3D%3D
https://assess.joincyberdiscovery.com/challenge-files/clock-pt2?verify=BY%2F8lhw%2BtbBgvOMDiHeB5A%3D%3D
https://assess.joincyberdiscovery.com/challenge-files/clock-pt3?verify=BY%2F8lhw%2BtbBgvOMDiHeB5A%3D%3D
https://assess.joincyberdiscovery.com/challenge-files/clock-pt4?verify=BY%2F8lhw%2BtbBgvOMDiHeB5A%3D%3D
https://assess.joincyberdiscovery.com/challenge-files/clock-pt5?verify=BY%2F8lhw%2BtbBgvOMDiHeB5A%3D%3D
(In case the links don't let you in:) each page contains just a bare tag, no other elements, with what seems to be a three-character code that always ends in 'a' (for example 'Aja'), and a new one is generated every 10 seconds (it is not re-generated client side).
Does anyone have any suggestions as to whether or not the link itself is a hint at encryption? I've URL-decoded it once and it came up with:
'https://assess.joincyberdiscovery.com/challenge-files/clock-pt5?verify=BY/8lhw tbBgvOMDiHeB5A==', which isn't much help.
Anyway, does anyone have any suggestions?
Thanks :)
It's not impossible. I have the answer here:
import requests

BASE = "https://assess.joincyberdiscovery.com/challenge-files"
VERIFY = "wMHfxKSix2qSPJtLe6U98w%3D%3D"

# Fetch all five clock pages within one 10-second window and
# concatenate the codes they return.
code = ""
for part in range(1, 6):
    page = "%s/clock-pt%d?verify=%s" % (BASE, part, VERIFY)
    code += requests.get(page).text

# Submit the concatenated string to the flag endpoint.
flag_url = "%s/get-flag?verify=%s&string=%s" % (BASE, VERIFY, code)
print(requests.get(flag_url).text)
Replace all of the links with the links you are given

ProcessMaker default tables

I am new to ProcessMaker. While exploring it I came across an employee onboarding process. Working through the process I understood the Dynaforms, but I had problems with the triggers. The triggers mention tables they get data from, but I didn't find those tables in my workspace. Please let me know if anyone knows the answer.
Thanks.
@@hrUser = @@USER_LOGGED;
//Get image
$sqlSdocument = "SELECT *
    FROM APP_DOCUMENT D, CONTENT C
    WHERE APP_UID = '" . @@APPLICATION . "'
      AND D.APP_DOC_UID = C.CON_ID
      AND C.CON_CATEGORY = 'APP_DOC_FILENAME'
    ORDER BY APP_DOC_CREATE_DATE DESC";
$resSdocument = executeQuery($sqlSdocument);
$dirDocument = $resSdocument[1]['APP_DOC_UID'];
$httpServer = (isset($_SERVER['HTTPS'])) ? 'https://' : 'http://';
@@linkImage = $httpServer . $_SERVER['HTTP_HOST'] . "/sys" . SYS_SYS . "/" . SYS_LANG . "/" . SYS_SKIN . "/cases/cases_ShowDocument?a=" . $dirDocument;
Judging by your code you want to query the uploaded file, but based on the comment you want to get the image?
Note that tables like APP_DOCUMENT and CONTENT are ProcessMaker's core workflow tables, stored in the workspace database itself (wf_<workspace> in MySQL); they are not user-created PM Tables, which is why you don't see them in your workspace, but triggers can still query them with executeQuery().
If employee onboarding is what you want, I am guessing that you uploaded the picture in the first Dynaform and you want it to be shown. You are in luck, because the ProcessMaker documentation already covers that.
If you want code that lets you see the image immediately after upload, you might want to sign up for an enterprise trial and test their Employee Onboarding process.
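For instance, to confirm from a trigger that a core table is reachable, a minimal sketch using ProcessMaker's executeQuery() trigger function (the @@docCount case variable is an illustrative assumption):
//Minimal sketch: count the documents attached to the current case.
//executeQuery() result rows are 1-indexed, as in the trigger above.
$rows = executeQuery("SELECT COUNT(*) AS CNT FROM APP_DOCUMENT WHERE APP_UID = '" . @@APPLICATION . "'");
@@docCount = $rows[1]['CNT'];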

getAdGroupBidLandscape returning no campaigns found

I'm trying to use the Google AdWords bid simulator system to get some insights out of our accounts. More specifically, I'm using the AdGroupBidLandscape() functionality, but it's returning 'No campaigns were found', even though we definitely have campaigns where the bid simulator tool works through the AdWords web interface, so I'm a bit confused. Here is the code I'm running, and yes, I know I'm only retrieving a single field; I'm just trying to keep things as simple as possible.
from googleads import adwords
import logging
import time

CHUNK_SIZE = 16 * 1024
PAGE_SIZE = 100

logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)

adwords_client = adwords.AdWordsClient.LoadFromStorage()
dataService = adwords_client.GetService('DataService', version='v201710')

offset = 0
selector = {
    'fields': ['Bid'],  # 'impressions', 'promotedImpressions', 'requiredBudget', 'bidModifier', 'totalLocalImpressions', 'totalLocalClicks', 'totalLocalCost', 'totalLocalPromotedImpressions'
    'paging': {
        'startIndex': str(offset),
        'numberResults': str(PAGE_SIZE)
    }
}

more_pages = True
while more_pages:
    page = dataService.getAdGroupBidLandscape(selector)
    # Display results.
    if 'entries' in page:
        for campaign in page['entries']:
            print('Campaign with id "%s", name "%s", and status "%s" was '
                  'found.' % (campaign['id'], campaign['name'],
                              campaign['status']))
    else:
        print('No campaigns were found.')
    offset += PAGE_SIZE
    selector['paging']['startIndex'] = str(offset)
    more_pages = offset < int(page['totalNumEntries'])
    time.sleep(1)
We have several different accounts attached to AdWords. My account is the only one that has developer API access, so I wonder if the problem is that my account isn't the primary account associated with the campaigns; I just have one of the few administrator accounts. Can anyone provide some insights on this for me?
Thanks,
Brad
The solution I found to this problem was to add a predicate to the selector specifying a particular CampaignId. It doesn't make sense to me that this would fix it, because if I understand things correctly it should really just filter the data, but it seems to have worked. I don't have a good explanation for that, but I thought someone else might find this useful. If I come to realize this wasn't the fix to my problem, I will come back and update this answer.
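For reference, the change amounts to something like this (a sketch; the campaign ID is a placeholder to replace with one of your own, not a value from the original post):
# Sketch: restrict the bid landscape request to a single campaign.
# The campaign ID below is a placeholder.
selector['predicates'] = [{
    'field': 'CampaignId',
    'operator': 'IN',
    'values': ['1234567890']
}]
page = dataService.getAdGroupBidLandscape(selector)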

Grails 3 Entity not saved when properties set in EntityClass

I have run into a problem which I do not understand. The following code does not work:
AccountingEntity accountingEntity = AccountingEntity.get(params.id);
accountingEntity.setLifecycleStatusToArchived();
accountingEntity.save(flush:true);
Where the method setLifecycleStatusToArchived looks like:
void setLifecycleStatusToArchived() {
    this.lifecycleStatus = AccountingEntity.LIFECYCLE_ARCHIVED; //predefined static variable
    this.considerForRankingJob = false;
    this.dateArchived = new Date();
}
The problem is that the entity is not updated.
There are no validation errors when I use accountingEntity.validate() beforehand.
However, this code works:
AccountingEntity accountingEntity = AccountingEntity.get(params.id);
accountingEntity.setDateArchived(new Date());
accountingEntity.setConsiderForRankingJob(false);
accountingEntity.setLifecycleStatus(AccountingEntity.LIFECYCLE_ARCHIVED);
accountingEntity.save(flush:true);
The code stopped working after an update from Grails 3.2.9 to 3.3.0.RC1 (GORM 6.1.5), even though I followed all the steps in the upgrade guide (http://docs.grails.org/3.3.x/guide/upgrading.html), and the rest of the code is working properly (including database access etc.).
Does anybody have an idea what the problem could be?
Thanks in advance and best regards!
The short answer is dirty checking. When you set properties inside an instance method, Grails doesn't know they are dirty.
See the following GitHub issue for how to resolve the problem:
https://github.com/grails/grails-data-mapping/issues/961
You have two options:
- call markDirty() every time you change an internal field, which is better for performance, or
- as per http://gorm.grails.org/latest/hibernate/manual/index.html#upgradeNotes, use hibernateDirtyChecking: true
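A sketch of the first option (markDirty(String) comes from GORM's DirtyCheckable trait, which domain classes implement automatically; the property names are the ones from the question):
void setLifecycleStatusToArchived() {
    this.lifecycleStatus = AccountingEntity.LIFECYCLE_ARCHIVED
    this.considerForRankingJob = false
    this.dateArchived = new Date()
    // Tell GORM these internal writes made the instance dirty,
    // so save(flush: true) actually issues an UPDATE.
    markDirty('lifecycleStatus')
    markDirty('considerForRankingJob')
    markDirty('dateArchived')
}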

DataNucleus Memory/Cache Handling for large update/insert

We are running an application in a Spring context, using DataNucleus as our ORM and MySQL as our database.
Our application has a daily job that imports a data feed into the database. The size of the feed translates into roughly 1 million rows of inserts/updates. The performance of the import starts out very good, but degrades over time (as the number of executed queries increases), and at some point the application freezes or stops responding. We have to wait for the whole job to complete before the application responds again.
This behavior looks very much like a memory leak to us, and we have been looking hard at our code to catch any potential problem; however, the problem didn't go away. One interesting thing we found from the heap dump is that org.datanucleus.ExecutionContextThreadedImpl (or the HashSet/HashMap it holds) occupies 90% of our memory (5 GB) during the import (I have attached screenshots of the dump below). My research on the internet says this reference is the level 1 cache (not sure if I am correct). My question is: during a large import, how can I limit/control the size of the level 1 cache? Maybe ask DataNucleus not to cache during the import?
If that's not the L1 cache, what's the possible cause of my memory issue?
Our code uses a transaction for every insert to prevent locking large chunks of data in the database. It calls the flush method every 2000 inserts.
As a temporary fix, we moved the import process to run overnight, when no one is using the app. Obviously, this cannot go on forever. Could someone at least point us in the right direction so that we can do more research and hopefully find a fix?
It would also be good if someone knows how to decode the heap dump.
Your help would be very much appreciated by all of us here. Many thanks!
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_heap_dump.png
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_dump2.png
Code below. The caller of this method does not have a transaction. This method processes one import object per call, and we need to process around 100K of these objects daily.
@Override
@PreAuthorize("hasUserRole('ROLE_ADMIN')")
@Transactional(propagation = Propagation.REQUIRED)
public void processImport(ImportInvestorAccountUpdate account, String advisorCompanyKey) {
    ImportInvestorAccountDescriptor invAccDesc = account.getInvestorAccount();
    InvestorAccount invAcc = getInvestorAccountByImportDescriptor(invAccDesc, advisorCompanyKey);
    try {
        ParseReportingData parseReportingData = ctx.getBean(ParseReportingData.class);
        String baseCCY = invAcc.getBaseCurrency();
        Date valueDate = account.getValueDate();
        ArrayList<InvestorAccountInformationILAS> infoList = parseReportingData
                .getInvestorAccountInformationILAS(null, invAcc, valueDate, baseCCY);
        InvestorAccountInformationILAS info = infoList.get(0);
        PositionSnapshot snapshot = new PositionSnapshot();
        ArrayList<Position> posList = new ArrayList<Position>();
        Double totalValueInBase = 0.0;
        double totalQty = 0.0;
        for (ImportPosition importPos : account.getPositions()) {
            Asset asset = getAssetByImportDescriptor(importPos.getTicker());
            PositionInsurance pos = new PositionInsurance();
            pos.setAsset(asset);
            pos.setQuantity(importPos.getUnits());
            pos.setQuantityType(Position.QUANTITY_TYPE_UNITS);
            posList.add(pos);
        }
        snapshot.setPositions(posList);
        info.setHoldings(snapshot);
        log.info("persisting a new investorAccountInformation(source:"
                + invAcc.getReportSource() + ") on " + valueDate
                + " of InvestorAccount(key:" + invAcc.getKey() + ")");
        persistenceService.updateManagementEntity(invAcc);
    } catch (Exception e) {
        throw new DataImportException(invAcc == null ? null : invAcc.getKey(), advisorCompanyKey,
                e.getMessage());
    }
}
Do you use the same PersistenceManager for the entire job?
If so, you may want to try closing it and creating a new one once in a while.
If not, this could be the L2 cache. What setting do you have for datanucleus.cache.level2.type? I think it's a weak map by default. You may want to try none for testing.
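Both ideas would look roughly like this (a sketch: datanucleus.cache.level2.type is DataNucleus's documented L2 cache property, but the other PMF/connection properties are omitted, the batch size of 2000 simply mirrors the flush interval from the question, and ImportRecord/processRecord() are hypothetical placeholders for your own types):
import java.util.List;
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

// Hypothetical import runner sketching both suggestions.
void runImport(List<ImportRecord> records) {
    // Disable the L2 cache for the import (connection properties omitted).
    Properties props = new Properties();
    props.setProperty("datanucleus.cache.level2.type", "none");
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);

    // Recycle the PersistenceManager periodically so its L1 cache
    // (the per-PM object map seen in the heap dump) cannot grow without bound.
    PersistenceManager pm = pmf.getPersistenceManager();
    int processed = 0;
    for (ImportRecord record : records) {
        processRecord(pm, record);         // hypothetical per-record import step
        if (++processed % 2000 == 0) {     // 2000 mirrors the question's flush interval
            pm.close();                    // drops this PM's L1 cache
            pm = pmf.getPersistenceManager();
        }
    }
    pm.close();
}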
