Running 2 instances of Adobe Analytics in parallel with DTM

I'm trying to migrate my Analytics implementation to DTM by building a second instance and having the old and the new one run in parallel for a while before disabling the old one.
When I browse the site with DTM staging mode enabled to test the setup, I get collision issues between the two instances. The alternate tracker variable I specified for the new version is not being defined. Instead of getting a server call from each instance, I get two calls from the old one, with a mix of variable values from both designs.
All the setup so far has been done through the DTM interface, with no custom code.
Adobe Analytics tool library config:
Code configuration = custom
Code Hosted = In DTM
Tracker Variable Name = s2 (old one uses "s")
The old instance is configured through AEM 5.6.1 with H.25; the new one uses whatever DTM includes by default.
What would be the way to dissociate the two instances?
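For context, here is a rough, purely illustrative sketch (not the asker's setup) of how a second tracker instance is usually instantiated explicitly in DTM custom code, assuming AppMeasurement's s_gi helper and a placeholder report suite ID; something like this in a page-load rule can confirm whether s2 really is a separate tracker object:

// Hypothetical sketch -- report suite ID and tracking server are placeholders
var s2 = s_gi("newReportSuiteID");           // returns a tracker bound to that report suite
s2.trackingServer = "example.sc.omtrdc.net"; // placeholder tracking server
s2.pageName = document.title;
s2.t();                                      // fires a page view from the second instance only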


How to bring complex Application Insights to a Farmer deployment?

I got interested in Farmer and decided to try it out in my project. I managed to replace most of the ARM template with Farmer.
However, Application Insights is left over, as I have quite a complicated setup there, including alerts, scheduled query rules and so on, whereas everything Farmer currently supports for AI is just the name, IP masking and the sampling percentage.
How can I plug my AI setup into Farmer so that I don't have to reject Farmer just because of that part? My service looks like this:
let webApp = webApp {
    name appName
    service_plan_name servicePlanName
    sku WebApp.Sku.B1
    always_on
    ...
}
So the webApp setup has a builder keyword for this, link_to_unmanaged_app_insights:
Instructs Farmer to link this webapp to an existing app insights instance that is externally managed, rather than creating a new one.
However, there are no examples and only one test using it, so after some experimenting, this is what proved to work:
Keep the AI setup in an ARM template in the source code, e.g. arm-template-ai.json.
Note the AI resource Id.
Use the aforementioned keyword in the F# app setup:
let webApp = webApp {
    name appName
    service_plan_name servicePlanName
    sku WebApp.Sku.B1
    always_on
    link_to_unmanaged_app_insights (ResourceId.create appInsightsName)
}
In the release pipeline, first deploy AI from the AI ARM template in the source code.
Then deploy all other resources from the ARM template generated by Farmer.
An example of how it can look in Azure DevOps:
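As a rough sketch only (the service connection name, artifact paths and variable names below are placeholders, not the actual pipeline), the two consecutive ARM deployment steps might look like this in pipeline YAML:

steps:
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy Application Insights (hand-written ARM)
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: my-service-connection      # placeholder
    subscriptionId: $(subscriptionId)
    resourceGroupName: $(resourceGroupName)
    location: $(location)
    csmFile: $(Pipeline.Workspace)/drop/arm-template-ai.json
    deploymentMode: Incremental
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy remaining resources (Farmer-generated ARM)
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: my-service-connection
    subscriptionId: $(subscriptionId)
    resourceGroupName: $(resourceGroupName)
    location: $(location)
    csmFile: $(Pipeline.Workspace)/drop/farmer-generated.json  # placeholder file name
    deploymentMode: Incremental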

Is it possible to set the region Google Cloud Dataflow uses at a project or organisation level?

My employers recently started using Google Cloud Platform for data storage/processing/analytics.
We're EU based so we want to restrict our Cloud Dataflow jobs to stay within that region.
I gather this can be done on a per-job / per-job-template basis with --region and --zone, but I wondered (given that all our work will use the same region) if there's a way of setting this more permanently at a wider level (project or organisation)?
Thanks
Stephen
Update:
Having pursued this, it seems that Adla's answer is correct, though there is another workaround (which I will respond with). Further to this, there is now an open issue with Google about this, which can be found/followed at https://issuetracker.google.com/issues/113150550
I can provide a bit more information on things that don't work, in case that helps others:
Google support suggested changing where Dataprep-related folders were stored, as per How to change the region/zone where dataflow job of google dataprep is running - unfortunately this did not work for me, though some of those responding to that question suggest it has for them.
Someone at my workplace suggested restricting Dataflow's quotas for non-EU regions (https://console.cloud.google.com/iam-admin/quotas) to funnel it towards the appropriate region, but when tested, Dataprep continued to favour the US.
Cloud Dataflow uses us-central1 as the default region for each job, and if the desired regional endpoint differs from the default, the region needs to be specified for every Cloud Dataflow job launched in order for it to run there. Workers will automatically be assigned to the best zone within the region, but you can also specify a zone with --zone.
As of this moment it is not possible to force the region or zone used by Cloud Dataflow through project or organization settings.
I suggest you request a new Google Cloud Platform feature. Make sure to explain your use case and how this feature would be useful for you.
As a workaround, to restrict job creation on Dataflow to a specific region and zone, you can write a script or application that only creates jobs with the region and zone you need, as sketched below. If you also want to limit job creation to that script alone, you can remove your users' job-creation permissions and grant the permission only to a service account used by the script.
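A minimal sketch of such a wrapper, assuming the Apache Beam Python SDK; the region, zone, transforms and parameter names are placeholders, not part of the answer above:

import apache_beam as beam
from apache_beam.options.pipeline_options import (
    GoogleCloudOptions,
    PipelineOptions,
    StandardOptions,
    WorkerOptions,
)

def run_eu_pipeline(project, job_name, temp_location):
    # Pin every job this script creates to an EU regional endpoint and zone.
    options = PipelineOptions()
    options.view_as(StandardOptions).runner = "DataflowRunner"
    gcp = options.view_as(GoogleCloudOptions)
    gcp.project = project
    gcp.job_name = job_name
    gcp.temp_location = temp_location
    gcp.region = "europe-west1"                              # regional endpoint
    options.view_as(WorkerOptions).zone = "europe-west1-b"   # optional: pin the worker zone

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.Create(["example"])                # replace with the real source
         | "Print" >> beam.Map(print))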
A solution Google support supplied to me, which basically entails using Dataprep as a Dataflow job builder rather than as a tool in and of itself:
Create the flow you want in Dataprep, but if there's data you can't send out of region, create a version of it (sample or full) where the sensitive data is obfuscated or blanked out and use that. In my case, setting the fields containing a user id to a single fake value was enough.
Run the flow.
After the job has been executed once, in the Dataprep web UI under "Jobs", use the three dots on the far right of the desired job and click on "Export results".
The resulting pop-up window will have a path to the GCS bucket containing the template. Copy the full path.
Find the metadata file at the above path in GCS.
Change the inputs listed in the file to use your 'real' data instead of the obfuscated version.
In the Dataflow console, in the menu for creating a job from a custom template, enter the path copied above as the "Template GCS Path".
From this menu, you can select a zone you would like to run your job in.
It's not straightforward, but it can be done. I am using a process like this, setting up a call to the REST API to trigger the job in the absence of Dataflow having a scheduler of its own.
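For reference, a rough sketch of such a REST trigger, assuming google-api-python-client with application default credentials; the project, template path, job name and zone below are placeholders:

from googleapiclient.discovery import build

def launch_dataprep_template():
    dataflow = build("dataflow", "v1b3")
    # Launch the template exported by Dataprep, pinned to an EU region/zone.
    request = dataflow.projects().locations().templates().launch(
        projectId="my-project",
        location="europe-west1",
        gcsPath="gs://my-bucket/dataprep/templates/my-template",  # path from "Export results"
        body={
            "jobName": "scheduled-dataprep-job",
            "environment": {"zone": "europe-west1-b"},
        },
    )
    return request.execute()

if __name__ == "__main__":
    print(launch_dataprep_template())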

Rails 4: How to create free demo version based on original app

I have a web application in Rails 4 where you have to log in to use it. Now I want a demo version of this app. By demo version I mean a version that has all the features of the original app but without the login. And all the demo data should (and can easily) be deleted from time to time (either automatically or manually).
With the original app up and running, I want to implement the demo version with the least effort. Ideally I can use most of the original code without any changes, and changes to the original code should then be available in the demo version without any extra work.
My first idea was to implement the demo version purely in the cache/session, so that when the session expires the data is deleted as well. I dropped that idea because of how deeply ActiveRecord is integrated in the original app: I would have to re-code all the demo classes and/or build some abstract parent classes and so on.
The second idea was to simply use the original app but add a flag to each demo account so that they can be distinguished from the regular ones. I hesitate with this idea because I'm afraid of bloating my database (i.e. the tables I use for the original app) with demo data, leading to lower performance and a higher cost/risk of misinterpretation when evaluating the app data (e.g. how many accounts were created yesterday).
Do you have any ideas how to realize such a demo version in an elegant way?
Smart approaches welcome!
You can have a Guest user account and a before_action in ApplicationController that checks whether the application is in demo mode (specifiable through a custom config) and automatically logs the user in, as sketched below.
You can use a cron job to delete the demo data. Whenever is a good solution for managing cron jobs in Ruby.
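A minimal sketch of that idea, assuming a Devise-style sign_in helper, a custom demo_mode config flag and a guest flag on User; all names here are placeholders, not the asker's code:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :sign_in_guest, if: -> { Rails.configuration.x.demo_mode }

  private

  def sign_in_guest
    return if current_user
    guest = User.find_or_create_by!(email: "guest@example.com") do |u|
      u.password = SecureRandom.hex(16)
      u.guest = true                      # flag so demo data can be purged later
    end
    sign_in(guest)
  end
end

# config/schedule.rb (whenever gem) -- purge demo data nightly
every 1.day, at: "3:00 am" do
  runner "User.where(guest: true).destroy_all"
end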
For automated fake data creation, use the whenever and faker gems: Faker will generate the fake data, and whenever will manage the cron job that clears the mock data after every demo session.
In short: session, cron, fake seed data.

How to manage one repository of an Xcode iOS app for multiple companies with slightly different requirements?

I have a mobile app on the App Store developed for a number of different companies. Each company uses a different server endpoint/icon/logo. I have managed to put this into a custom plist file and, depending on the company's endpoint, I switch to a different build setting.
Now these companies are diverging in how they authenticate: one uses another app to authenticate and one uses server calls. Also, for one company I am receiving data partly from server calls and partly from local files.
I have to handle different login behaviour for the different projects. It is mostly displaying/disabling some extra views. I don't want to have two repositories or branches, because almost 85% of the functionality is the same. I want to add functionality to both at the same time, sometimes to only one of them, and run my tests and everything.
I am looking for some way to manage this app so that most functionality stays shared and it remains a single app. How can I do that? Any suggestions?
This is a very old problem. Basically you have two options, build time and run-time, and from your description it may be that you need both (I would not trust configuration to drive my authentication code).
Build time means using conditional compilation (e.g. Which conditional compile to use to switch between Mac and iPhone specific code?) and a different build profile for each customer. I assume that Xcode targets (see How to manage the code of multiple, very similar Xcode projects) allow you to define different build profiles.
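A minimal sketch of the build-time option in Swift, assuming per-customer flags (e.g. COMPANY_A) defined in each target's Active Compilation Conditions build setting; the flag and case names are placeholders:

enum AuthFlow {
    case companionApp   // authenticate via the other app
    case serverLogin    // authenticate via server calls
}

func currentAuthFlow() -> AuthFlow {
    #if COMPANY_A
    return .companionApp
    #else
    return .serverLogin
    #endif
}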
Run-time checks map to feature toggling.
I suggest not using version control to manage nuances of the same application because it quickly becomes a merge nightmare, even with Git.

Multiple web-interfaces for same neo4j database

Note: I want solutions only for neo4j community edition, not the enterprise one. Thanks!
I want to use the default web interface, http://localhost:7474/browser/, for development and read/write purposes. I would also like to use another web interface, which I will open to the public for read-only purposes, running on a different port, say 8474.
I tried this:
- Used two instances (Neo4j folders): a) read_only = true, b) read_only commented out.
- Changed the http/https ports of the two so they differ.
- Changed the org.neo4j.server.database.location property in the 'read_only' one to point to the location of the 'read/write' one.
This doesn't work. Any workaround? I just want two web interfaces for the same database: one read-only, one with read/write support.
Set up a cluster of 3 Neo4j enterprise instances (or 2 instances plus one arbiter) and set read_only=true on one of the instances.
See http://neo4j.com/docs/stable/ha-setup-tutorial.html for detailed setup instructions.
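A rough sketch of the settings involved, assuming Neo4j 2.x enterprise HA as in the linked tutorial; exact property names vary by version, and the server ids, hosts and port are placeholders:

# neo4j.properties (every instance; server_id must be unique)
ha.server_id=1
ha.initial_hosts=host1:5001,host2:5001,host3:5001

# neo4j-server.properties (every instance)
org.neo4j.server.database.mode=HA

# on the public instance only: read-only mode (as in the question) and a different HTTP port
read_only=true
org.neo4j.server.webserver.port=8474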
