Trying to create a new context with a previously saved storageState file throws an error:
Cookie should have a valid expires, only -1 or a positive number for the unix timestamp in seconds is allowed
at captureRawStack (C:\\tmp\\myProject\\node_modules\\playwright-core\\lib\\utils\\stackTrace.js:64:17)
In the BeforeAll hook I log in to Microsoft Azure AD, and the saved storageState file looks fine.
But creating the context with the storageState option pointing to the saved file results in the error above.
I'm not using the Playwright test runner; I'm integrating Playwright with Cucumber.
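For reference, this is a rough sketch of the workaround I'm considering (the storageState.json filename and the cleanup logic are just my assumptions, not something from the Playwright docs): load the saved state, coerce any bad cookie expires values to -1 or a unix timestamp in seconds, and pass the cleaned object to newContext():

const fs = require('fs');
const { chromium } = require('playwright');

(async () => {
  // Load the storage state saved in the BeforeAll hook
  const state = JSON.parse(fs.readFileSync('storageState.json', 'utf-8'));

  for (const cookie of state.cookies) {
    if (typeof cookie.expires !== 'number' || cookie.expires < -1) {
      cookie.expires = -1; // treat as a session cookie
    } else if (cookie.expires > 1e11) {
      cookie.expires = Math.floor(cookie.expires / 1000); // milliseconds -> seconds
    }
  }

  const browser = await chromium.launch();
  // newContext also accepts the storage state as an object, not just a file path
  const context = await browser.newContext({ storageState: state });
  // ... run the Cucumber steps with this context
  await browser.close();
})();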
On my web site I have enabled the Failed Request Tracing log feature, configured it to store log files in the default folder %SystemDrive%\inetpub\logs\FailedReqLogFiles, and set some failed request tracing rules.
I have granted write permissions on that folder to the application pool identity, and I have checked that the IIS_IUSRS account has write permissions on that folder as well.
My web site is an ASP.NET MVC application that uses a Web Garden configuration (the application pool is set to 4 worker processes).
Log files are correctly stored in the folder, but I get continuous warning messages in the event log like the one below:
FailedRequestTracing module failed to write buffered events to log
file for the request that matched failure definition. No logs will be
generated until this condition is corrected. The problem happened at
least %1 times in the last %2 minutes. The data is the error.
It seems the cause is that more than one worker process is trying to create a log file in the same folder, so they collide when generating the next filename in the sequence (each file gets a correlative number), as explained here by anilr.
How can I solve this problem so that these warning messages stop appearing constantly in the event log?
Note: I am using IIS 8.5.9600.16384 under Windows Server 2012 R2 Standard.
This event is logged when the FailedRequestTracing module fails to write buffered events to the log file for a request that matched a failure definition. You can try the following steps to solve the problem:
Enable tracing access to the log file directory.
Find the current Failed Request Tracing log file path setting.
Make sure the configured Failed Request Tracing log file directory exists.
Make sure the IIS_IUSRS group has permission to write to the log file directory.
For more information about this error, you can refer to this link: FailedRequestTracing module failed to write buffered events to log file for the request that matched failure definition.
I have a Node.js app in a Docker container that I'm trying to deploy to Google Cloud Run.
I want my app to be able to read/write files from my GCS buckets that live under the same project, and I haven't been able to find much information around it.
This is what I've tried so far:
1. Hoping it works out of the box
A.k.a. initializing without credentials, like in App Engine.
const { Storage } = require('@google-cloud/storage');
// ...later in an async function
const storage = new Storage();
// This line throws the exception below
const [contents] = await storage.bucket('mybucket')
  .file('myfile.txt')
  .download();
The last line throws this exception:
{ Error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500"
at Gaxios._request (/server/node_modules/gaxios/build/src/gaxios.js:85:23)
2. Hoping it works out of the box after granting the Storage Admin IAM role to my Cloud Run service account
Nope. No difference from the previous attempt.
3. Copying my credentials file as a cloudbuild.yaml step:
...
- name: 'gcr.io/cloud-builders/gsutil'
args: ['cp', 'gs://top-secret-bucket/gcloud-prod-credentials.json', '/www/gcloud-prod-credentials.json']
...
It copies the file just fine, but then the file is not visible from my app. I'm still not sure where exactly it was copied to, but listing the /www directory from my app shows no trace of it.
4. Copying my credentials file in a Docker step
Wait, but for that I need to authenticate gsutil, and for that I need the credentials.
So...
What options do I have without uploading my credentials file to version control?
This is how I managed to make it work:
The code for initializing the client library was correct. No changes here from the original question. You don't need to load any credentials if the GCS bucket belongs to the same project as your Cloud Run service.
I learned that the service account [myprojectid]-compute@developer.gserviceaccount.com (aka the "Compute Engine default service account") is the one used by default for running the Cloud Run service unless you specify a different one.
I went to the Service Accounts page and made sure that the mentioned service account was enabled (mine wasn't, this was what I was missing).
Then I went here, edited the permissions for the mentioned service account and added the Storage Object Admin role.
More information in this article: https://cloud.google.com/run/docs/securing/service-identity
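For reference, this is roughly what the working code looks like once the role is granted; no key file is loaded, since the Cloud Run service identity supplies the credentials (the bucket name from the question and the output.txt object name here are just examples):

const { Storage } = require('@google-cloud/storage');

async function readAndWrite() {
  // No credentials passed in: the client picks up the Cloud Run service account
  const storage = new Storage();

  // Read: download() resolves to an array whose first element is a Buffer
  const [contents] = await storage.bucket('mybucket').file('myfile.txt').download();
  console.log(contents.toString('utf-8'));

  // Write: save() uploads the given data as the object's contents
  await storage.bucket('mybucket').file('output.txt').save('hello from Cloud Run');
}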
I believe the correct way is to change to a custom service account that has the desired permissions. You can do this under the 'Security' tab when deploying a new revision.
I have issues recording a file upload request using the JMeter Proxy. The file can be selected with the browse option, but on pressing the SAVE or SUBMIT button the page refreshes and returns to its initial state.
I'm working on a load test project against an on-premise SharePoint (2013) website with the following conditions:
Form-based authentication - login and session cookies are working correctly with JMeter.
A few dynamic variables like __EVENTVALIDATION, __VIEWSTATE, __REQUESTDIGEST and __VIEWSTATEGENERATOR need to be extracted with a RegEx Extractor in JMeter from every request and passed to the subsequent POST request as parameters.
As a workaround, I used the BlazeMeter Chrome Extension to record the file upload requests, and all the API calls were recorded correctly. But I have issues replaying it in JMeter: the file upload fails again even though all the requests pass.
All the file upload POST requests pass with a success response code of 200 and return the same HTML content as the response when replaying in JMeter, but the file never gets uploaded.
If you are absolutely sure that you have an HTTP Cookie Manager in place, correlation is working fine, etc., then in order to record the file upload request it should be enough to put the file into JMeter's "bin" folder so the HTTP(S) Test Script Recorder can locate it.
Make sure the file is present in JMeter's "bin" folder during replay as well. Check out the Recording File Uploads with JMeter guide for a more comprehensive explanation of this limitation.
If it doesn't help, your JMeter configuration is still not correct. The common practice is capturing a request from a real browser and from JMeter using a sniffer tool like Fiddler or Wireshark. This way you will be able to inspect the requests at a low level and identify the differences. Once you find the cause, amend the JMeter configuration so the request originating from JMeter looks exactly like the one that comes from the browser.
I have a Windows script (.vbs) file that uses Microsoft Excel (2010)'s API to create Excel files.
Today I needed to migrate the script to a new server running Windows Server 2008 R2.
When I run the script manually it runs perfectly.
When I try to run the same script as a scheduled task, with the same user (who is an administrator on the machine in question), and 'run with highest privileges' checked, it can run up to a point, but fails when it tries to use Excel's api to save a file.
Specifically, the script never goes past the wb.saveas line:
logfile.writeline ("About to save c:\scripts\" & agentfilename & ".xls")
wb.saveas "c:\scripts\" & agentfilename & ".xls", 56
logfile.writeline (result & " Saved c:\scripts\" & agentfilename & ".xls")
(My log file contains the 'about to save' entry, but not the 'saved' entry.)
Note, 'wb' is created earlier in the script as follows - set wb = xl.workbooks.add
One problem here is that I cannot see what error is occurring, because the script is being run as a scheduled task.
This ran perfectly on the previous server (Server 2003).
I have UAC turned off completely on the new server.
The script has write access to the folder because it is able to create and append to the log file.
Any ideas?
Edit:
I found out what the problem is.
Apparently the scheduled task only works if the 'run only when the user is logged in' radio button is checked, because Excel hangs when attempting to save the file if the user is not logged in. Odd that I didn't have this problem on the old server (same copy of Excel, uninstalled from the old server and installed on the new one).
This is a pain because it means I will need to leave the user logged into the machine for this scheduled task to work. If anyone knows of a way around this limitation, I would be grateful.
I found out what the problem is.
Apparently the scheduled task only works if the 'run only when the user is logged in' radio button is checked, because Excel hangs when attempting to save the file if the user is not logged in.
This is a pain because it means I will need to leave the user logged into the machine for this scheduled task to work.
I'm trying to download my Fortify 360 .fpr file through the command line so I can automate a process, using the following command:
fortifyclient -url [url] -authtoken [token] downloadFPR -file "C:\path\to\local\Fortify.fpr" -projectID [projectID]
The problem is that I am getting the following message when I try this:
Access Denied. Please ensure the requested project exists and the supplied user has appropriate permissions.
I have all the permissions needed to upload/download the .fpr through the web UI, and I've been able to successfully upload the .fpr from the command line; it's just downloading it that I'm having a problem with.
I figured out that my issue was related to my token. I had generated my token using AnalysisUploadToken for my gettoken argument, and that only allows uploading. I had to create another token using AnalysisDownloadToken in order to download it.