Not sure if anyone else has come across a similar issue before. When I create a Remote Config parameter as a boolean with no conditional values, my iOS project reads it from Firebase correctly. However, the problem starts when I add a conditional value that checks whether the user is in a particular location, e.g. the UK.
Every time I access the “test_country” parameter, it shows me the default value of false rather than the expected “true”. I looked through different questions on Stack Overflow and set
configSettings.minimumFetchInterval = 60
in debug mode to fetch more quickly than the recommended 12 hours used in production, and even tried another Stack Overflow recommendation of calling
remoteConfig.fetch(withExpirationDuration: 0)
with an expiration duration of zero to force a fetch from the remote, just to make sure it isn't a fetching issue.
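For completeness, this is roughly how I fetch and read the value (a minimal sketch reconstructed from the snippets above; exact closure signatures may differ slightly between Firebase SDK versions):

import FirebaseRemoteConfig

let remoteConfig = RemoteConfig.remoteConfig()

// Debug-only: allow frequent fetches instead of the ~12 h production default.
let settings = RemoteConfigSettings()
settings.minimumFetchInterval = 60
remoteConfig.configSettings = settings

// Force a fetch, activate it, then read the flag.
remoteConfig.fetch(withExpirationDuration: 0) { status, error in
    guard status == .success, error == nil else { return }
    remoteConfig.activate { _, _ in
        // Always prints the in-app default (false) instead of the conditional "true".
        print(remoteConfig["test_country"].boolValue)
    }
}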
Any suggestions as to what could be going wrong? I'm not sure what more information is needed to help in this case, so please let me know.
I've looked through the following questions:
FirebaseRemoteConfig fetchAndActivate not updating new value
Firebase Remote Config fetch doesn't update values from the Cloud
And many more; I even posted the question in the Firebase Slack.
This issue has been resolved. With the information I originally gave, it was probably going to be hard to deduce what went wrong. Apologies to anyone who read this question.
On to the answer. For my app we have a release build and a debug build, with only one Firebase project managing both. So, for the debug build, we normally turn Analytics collection off by setting:
FIREBASE_ANALYTICS_COLLECTION_DEACTIVATED
For more information on this, see: Configure Analytics data collection and usage
Analytics collection needs to be turned back on in order for Google Analytics to collect and use Analytics data, which I believe is required for conditional values to work, especially if the condition uses a custom definition.
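For anyone hitting the same thing, this is roughly what the fix looked like on our side (a sketch of our setup rather than official guidance; FIREBASE_ANALYTICS_COLLECTION_DEACTIVATED and the runtime toggle are documented in the link above, so double-check the docs for your SDK version):

import UIKit
import FirebaseCore
import FirebaseAnalytics

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        FirebaseApp.configure()
        // Our debug Info.plist used to set FIREBASE_ANALYTICS_COLLECTION_DEACTIVATED = YES.
        // That key disables Analytics permanently, so it has to be removed (or set to NO)
        // before a runtime toggle like the one below has any effect.
        Analytics.setAnalyticsCollectionEnabled(true)
        return true
    }
}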
This question already has answers here: Can I get console logs from my iPhone app testers? (2 answers). Closed 3 years ago.
I am looking for a way to programmatically pull all of my application logs that are shown in the console.
I DO NOT WANT to just be able to see them, so using Xcode as a preview will not work for me.
What I want is for my users to be able to send me those logs along with feedback at any time, since the app is in its beta phase and plain user explanations are not good enough for proper debugging at my end.
So, what I DO WANT is some iOS analogue of Android's logcat command, which is used something like this:
final Process process = Runtime.getRuntime().exec("logcat -d");
final InputStream inputStream = process.getInputStream();
... then you manipulate the stream into whatever you need to do with it; in my case, to create a String object that I would pass on to my log service.
SO FAR in my investigation I have only been able to find this option, but I would appreciate it if something easier to integrate into a Swift app is available.
Also, the os.log module is used for logging, but I wasn't able to find an option that allows loading the logs back. Or my understanding of the following explanation, found HERE, is not good enough:
Use the Console app or the log command-line tool to view and filter log messages.
EDIT:
THE END USER SHOULD NOT INTERACT WITH LOGS in any way other than clicking the submit button while the switch for debug logs is ON.
So @matt, NO - this is not a duplicate of the linked question.
End users should NOT have to download anything else in order to send me my own app's logs.
That is a classic killer of user experience and should not even be approved as a solution on the linked post either.
To your last point: that explanation is telling you to connect your phone to a Mac and use the "Console" application (or the log command-line tool) to extract the console logs, e.g. https://support.apple.com/en-ie/guide/console/welcome/mac
The first link you have provided (using ASL) is the only solution I'm aware of to do this natively. Two other options would be:
Use a server/cloud-based analytics/event-tracking system to log messages for each user. If someone reports an issue you can search the logs to see what happened. (Be careful about storing personal information, though.)
You could write your own class or function that takes in a string, writes it to a text file, and logs it to the console. During development you will see it in the console, and in the wild it will store it. When you want the details, you could just read the text file and upload it; see the sketch below. You need to be careful of it growing too big, and again, depending on where it's stored, personal data could be an issue.
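To make option 2 concrete, a minimal sketch could look like the following (the class and file names are made up, and in practice you would want size limits / rotation and to think about what personal data ends up in the file):

import Foundation
import os.log

// Hypothetical helper: logs to the console during development and appends the
// same line to a file that can later be attached to user feedback.
final class FeedbackLogger {
    static let shared = FeedbackLogger()

    private let fileURL: URL = {
        let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        return documents.appendingPathComponent("app-log.txt")
    }()

    func log(_ message: String) {
        os_log("%{public}@", type: .debug, message)

        let line = "\(Date()): \(message)\n"
        guard let data = line.data(using: .utf8) else { return }
        if let handle = try? FileHandle(forWritingTo: fileURL) {
            handle.seekToEndOfFile()
            handle.write(data)
            handle.closeFile()
        } else {
            try? data.write(to: fileURL)   // first write creates the file
        }
    }

    // Read everything back when the user taps "submit feedback".
    func collectLogs() -> String {
        (try? String(contentsOf: fileURL, encoding: .utf8)) ?? ""
    }
}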
I've always used option 1. There are many free services to do this, and lots of solutions for releasing apps to test users provide this out of the box too, Microsoft's App Center being one, which I believe is free unless you want CI/CD.
We've noticed a very strange behavior change on our website (ASP.NET MVC) starting in the early morning (GMT) on the 12th of January this year (2018).
HTTP POSTs from the site started firing twice (unconfirmed, but we suspect sometimes more than twice), and scouring high and low we couldn't find that we'd changed anything.
One of the few things we dynamically load is Google Analytics (specifically Google Tag Manager). In the course of trial and error we tried disabling everything external (which made the phenomenon disappear) and then re-enabling them one by one; once we re-enabled GA the problem reappeared.
We also tried removing everything except GA and the problem persisted.
When searching we can't find that anything has been updated in GA, so it's very unclear why it suddenly started, and we have also been unable to find anyone else reporting the same problem (either historically or presently).
Our current best guess is that one of GA's dependencies has updated, and either it contains a bug or it's exposing an already existing fault in our code.
Has anyone else noticed the same problem? Anyone find something in their code that caused the strange behavior of GA?
I found the error; it was caused by two erroneously set-up triggers.
Both triggers were of the form-submit type, and both had two activation conditions: one "Activate trigger when all conditions are met" and one "Activate trigger on specific forms".
The problem was that both "all conditions" activators were set to "url matches regular expression .*", whereas the second activator for each targeted the correct Form Path for its respective form.
Whoever set it up must have assumed that Google Tag Manager uses a logical "and" between the two activators (not an unrealistic assumption), but based on my testing, at least, it seems that the trigger activates when either activator matches.
I couldn't see any reason for the first regex match against ".*", so the fix was simply to supply a unique URL expression for each trigger.
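For example (made-up values): one trigger could use "url matches regular expression ^/checkout/" and the other "^/contact/", so that each trigger's condition group only matches the page its own form actually lives on.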
No explanation yet as to why it suddenly became a problem, because the configuration has been wrong for a couple of months at least.
P.S. For whatever reason our GTM is not in English, so take my quoted names on fields/etc with a grain of salt as they are translated.
Update
The website uses AJAX to post the forms; the combination of that and the "Await tags" flag on the triggers looks like the likely source of why the combined conditions were not acting as expected.
Which means an unannounced performance update to GTM around "Await tags" could have been the catalyst that caused the problem to start occurring with alarming frequency.
I am using Revulytics SDK to track feature usage and came across the below problem.
I am sending feature usage after properly setting up the SDK configuration etc, using the EventTrack() method like this:
GenericReturn grTest = telemetryObj.EventTrack("FeatureUsage", textBoxName.Text.ToString(), null, false);
This returns OK, and usually I can see the usage data in the dashboard. However, after multiple tests, the data I am sending does not show up on the dashboard.
Can anyone give me a hint on how to debug this? Thanks for any help!
I hit a similar issue when first working with this SDK.
I was able to address this as soon as I understood the following:
There are event quotas for the incoming events;
Event names are used for making the distinction.
So when I was sending dummy test data, it made it there, but when I sent some demo data for stakeholders, it was not showing up.
I think the same happens here. You're getting the event name from textBoxName.Text... pretty sure that varies every time you run the code.
Here are the things to keep in mind when testing your code:
the server has a mechanism to discard / consider events;
by default, it allows the first xx events, depending on the quota;
if you are sending more than xx events, they will not show up in reports.
So, you must control which events to discard and which to consider (there are a couple of levels you can configure, and based on them you can get the events into various types of reports).
Find the "Tracked Events Whitelist Management" section. You will be able to control these things from there.
This blog helped me (it is not SDK documentation): https://www.revulytics.com/blog/getting-started-with-usage-intelligence-part2-event-tracking
Good luck!
I am using Geb 0.9.0. I recently found out about Geb's template options and think they can be very useful, but I want to change a few defaults across all my Pages before using them. In particular, I want the wait parameter to default to true.
I tried reading the whole documentation for the options but couldn't find anything about how to change the defaults.
It's not currently possible to change the defaults of template options. Please feel free to submit an issue in the tracker if you would like to see it implemented in the future.
Nevertheless, even if it were possible, I would suggest not using true as the default value for the wait option, because it would delay any failure caused by a missing element by the amount of time defined in the default waiting preset. Resolving content definitions for elements that are not present would also incur that delay.
I have a web test for the following request:
{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}
And I receive the error message:
The maximum number of unique Web test request URLs to report on has been exceeded; performance data for other requests will not be available
because there are thousands of unique URLs (because I'm testing different values in the URL). Does anyone know how to fix this?
There are a few features within Visual Studio (2010 and above) that will help with this problem. I am speaking for 2010 specifically, and assuming that the same or similar options are available in later versions as well.
1. Maximum Request URLs Reported:
This is a General Option available in the Run Setting Properties of the load test. The default value set here is 1,000. This default value is usually sufficient... but not always high enough. Depending on the load test size, it may be necessary to increase this. Based on personal experience, if you are thinking about adjusting this value, go through your tests first and get an idea of how many requests you are expecting to see reported. For me, a rough guideline that is helpful:
number_of_requests_in_webtest * number_of_users_in_load_test = total_estimated_requests
If your load test has multiple web tests in it, adjust the above accordingly by figuring out the number of requests in each individual test, summing those values, and multiplying by the number of users.
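To make the estimate concrete (with made-up numbers): a single web test containing 25 requests, run in a load test with 200 virtual users, works out to 25 * 200 = 5,000 estimated unique request URLs, which is already well above the default limit of 1,000.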
This option is more appropriate for large load tests that reference several web tests and/or have a very high user count. One reference for a problem/resolution use case of this option is here:
https://social.msdn.microsoft.com/Forums/en-US/ffc16064-c6fc-4ef7-93b0-4decbd72167e/maximum-number-of-unique-web-test-requests-exceeded?forum=vstswebtest
In the specific case mentioned in the originally posted question, this would not resolve the problem entirely. In fact, it could create a new one, where you end up running out of virtual memory. Visual Studio will continue to create new request metrics and counters for every single request that has a unique AWE2_RandomActiveAuctionIds.
2. Reporting Name:
Another option, which @AdrianHHH already touched on, is the "Reporting Names" option. This option is found in the request properties, inside the web test. It defaults to nothing, which in turn results in Visual Studio creating the reported name from the request itself. This behavior creates the issue you are experiencing.
This option is the one that will directly resolve the issue of a new entry being reported for every unique request.
If you have a good idea of the expected number of requests to be seen in the load test (and I think it is a good idea to know this information, for debugging purposes, when running into this exception) a debugging step would be to set the "Maximum Request URLs Reported" DOWN to that value. This would force the exception you are seeing to pop up more quickly. If you see it after adjusting this value, then there is likely a request that is having a new reported value generated each time a virtual user executes the test.
Speaking from experience, this debugging step can save you some time and hair when dealing with requests that contain sessionId, GUID, and other similar types of information in them. Unless you are explicitly defining a Reporting Name for every single request... it can be easy to miss a request that has dynamic data in it.
3. Record Results:
Depending on the request, and its importance to your test, you can opt to remove it from your test results by setting this value to false. It is accessed under the request properties, within the web test itself. I personally do not use this option, but it may also be used to directly resolve the issue you are experiencing, given that it would remove the request from the report altogether.
A holy-grail-of-sorts document can be found on Codeplex that covers the Reporting Name option in a small amount of detail. At the time of writing this answer, the following link can be used to get to it:
http://vsptqrg.codeplex.com/releases/view/42484
I hope this information helps.