Can we move a source from one collector to another in Sumo Logic

We have two Sumo Logic collectors, one for QA and the other for Prod.
We needed to create a source for an application in both of these collectors. However, by mistake we created both of them in the same collector (QA).
Now I want to move the Prod source from the QA collector to the actual Prod collector. Is it possible to achieve this, or should I create a new source in the Prod collector and delete the old one in the QA collector?

Related

How do I structure Jobs in Jenkins?

I have been tasked with setting up automated deployment and, after some research, settled on Jenkins to get the job done. Prior to this I had approximately zero knowledge of Jenkins, beyond hearing the name. I have no real knowledge of DevOps beyond what I have learnt in the last couple of weeks; no formal training, no actual books, just Google searches.
We are not running a full blown/classic CI/CD process; this is a business decision. The basic requirements are:
Source code will be stored in GitHub.
Pull requests must be peer approved.
Pull requests must pass build/unit/db deploy tests.
Commits to specific branches must trigger a deployment to a related specific environment (Production, Staging or Development).
The basic functionality that I am attempting to support covers (what I currently see as) two separate processes:
On creation of a pull request, the application is built, unit tests are run, and the DB deploy is tested. Status info must be passed to GitHub.
On commit to one of three specific branches (master, staging and dev), the application should be built and deployed to one of three environments (production, staging and dev).
I have managed to cobble together a pipeline that handles the first process. I am using the Generic Webhook Trigger plugin, and manually handling all steps using a declarative pipeline stored in source control. This works rather well so far and, after much hacking, I am quite happy with the shape of it.
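As a rough illustration only, a pull-request pipeline like the one described above might look something like the following sketch, assuming the Generic Webhook Trigger plugin; the token, JSONPath expressions and shell scripts (build.sh, run-unit-tests.sh, test-db-deploy.sh) are invented placeholders, not details from the question.

```groovy
// Hypothetical Jenkinsfile for the pull-request pipeline described above.
pipeline {
    agent any
    triggers {
        // Trigger provided by the Generic Webhook Trigger plugin; the variables
        // and token are placeholders for whatever the GitHub webhook actually sends.
        GenericTrigger(
            genericVariables: [
                [key: 'pr_action', value: '$.action'],
                [key: 'pr_branch', value: '$.pull_request.head.ref']
            ],
            token: 'project-pr-token',
            causeString: 'Pull request on $pr_branch'
        )
    }
    stages {
        stage('Build')          { steps { sh './build.sh' } }
        stage('Unit tests')     { steps { sh './run-unit-tests.sh' } }
        stage('DB deploy test') { steps { sh './test-db-deploy.sh' } }
    }
    post {
        // Status info must be passed back to GitHub, e.g. via the commit status API.
        always { echo "Would report ${currentBuild.currentResult} to GitHub here" }
    }
}
```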
I am now starting work on the next bit, automated deployment.
On to my actual question(s).
In short, how do I split this up into Jobs in Jenkins?
To my mind, there are 1, 2 or 4 Jobs to be created:
One Job to Rule them All
This seems sub-optimal to me, as the pipeline will include relatively complex conditional logic and, depending on whether the Job is triggered by a Pull Request or a Commit, different stages will be run. The historical data will be so polluted as to be near useless.
OR
One job for handling pull requests
One job for handling commits
Historical data for deployments across all environments will be intermixed. I am a little concerned that I will end up with >1 Jenkinsfile in my repository. Although I see no technical reason why I can't have >1 Jenkinsfile, every example I see uses a single file. Is it OK to have >1 Jenkinsfile (Jenkinsfile_Test & Jenkinsfile_Deploy) in the repository?
OR
One job for handling pull requests
One job for handling commits to Development
One job for handling commits to Staging
One job for handling commits to Production
This seems to have some benefit over the previous option, because historical data for deployments into each environment will not cross-pollute each other. But now we're well past the (perceived) single-Jenkinsfile limit, and I will end up with four files (Jenkinsfile_Test, Jenkinsfile_Deploy_Development, Jenkinsfile_Deploy_Staging and Jenkinsfile_Deploy_Production). This method also brings either extra complexity (common code in a shared library) or copy/paste code reuse, which I certainly want to avoid.
My primary objective is for this to be maintainable by someone other than myself, because Bus Factor. A real DevOps/Jenkins person will have to update/manage all of this one day, and I would strongly prefer them not to suffer from my ignorance.
I have done countless searches, but I haven't found anything that provides the direction I need here. Searches for best practices make no mention of handling >1 Jenkinsfile, instead focusing on the contents of a single pipeline.
After further research, I have found an answer to my core question. This might not be the absolute correct answer, but it makes sense to me, and serves my needs.
While it is technically possible to have >1 Jenkinsfile in a project, that does not appear to align with best practices.
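(For context, the "technically possible" route simply means pointing each Pipeline job at a different Script Path in the same repository. A rough Job DSL sketch of that idea follows; the job names, repository URL and file names are invented for illustration.)

```groovy
// Hypothetical Job DSL: two Pipeline jobs reading different Jenkinsfiles
// from the same repository via their Script Path setting.
pipelineJob('project-test') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://github.com/example-org/project.git') }
                    branch('master')
                }
            }
            scriptPath('Jenkinsfile_Test')
        }
    }
}

pipelineJob('project-deploy') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://github.com/example-org/project.git') }
                    branch('master')
                }
            }
            scriptPath('Jenkinsfile_Deploy')
        }
    }
}
```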
The best practice appears to be to create a separate repository for each Jenkinsfile, which maps 1:1 with a Job in Jenkins.
To support my specific use case I have removed the Jenkinsfile from my main source code repository. I then created 4 new repositories:
Project_Jenkinsfile_Test
Project_Jenkinsfile_Deploy_Development
Project_Jenkinsfile_Deploy_Staging
Project_Jenkinsfile_Deploy_Production
Each repository contains a single Jenkinsfile and a readme.md that, in theory, contains useful information.
This separation gives me a nice view of the historical success/failure of the Test runs as a whole, and Deployments to each environment separately.
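As a concrete (but hypothetical) illustration, one of the deployment repositories might contain nothing more than a Jenkinsfile along these lines; because the Jenkinsfile lives outside the application repository, it has to check that repository out explicitly. The repository URL, branch and scripts below are placeholders.

```groovy
// Hypothetical Jenkinsfile in Project_Jenkinsfile_Deploy_Staging.
pipeline {
    agent any
    stages {
        stage('Checkout application') {
            steps {
                // This repo holds only pipeline code, so fetch the application repo here.
                git branch: 'staging', url: 'https://github.com/example-org/project.git'
            }
        }
        stage('Build') {
            steps { sh './build.sh' }
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' }
        }
    }
}
```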
It is highly likely that I will later create a fifth repository:
Project_Jenkinsfile_Deploy_SharedLibrary
This last repository would contain pipeline code that is shared amongst the four 'core' pipelines. Once I have the 'core' pipelines up and running properly, I will consider refactoring what I can into this shared library.
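If that refactoring happens, consuming the shared library from one of the 'core' pipelines could look roughly like this; the library name (project-pipeline-library) and the deployTo step are assumptions for the sake of the example.

```groovy
// Assumes the shared library is registered in Jenkins as 'project-pipeline-library'
// and exposes a custom step defined in vars/deployTo.groovy.
@Library('project-pipeline-library') _

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                deployTo 'staging'   // hypothetical step provided by the shared library
            }
        }
    }
}
```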
I will not accept my own answer at this point, in the hope that more answers are forthcoming.
Here's a proposal I would try for your requirements, based on my experience at my last job.
Job1: builds and runs unit tests on every commit on master or whatever your main dev branch is (checks every 20 minutes or whatever suits you); this job usually finds compile and unit test issues very fast
Job2 (optional): run integration tests and various static code checks (e.g. clang-tidy, valgrind, cppcheck, etc.) every night, if the last run of Job1 was successful; this job usually finds lots of things, but probably takes a lot of time, so let it run only at night
Job3: builds and tests every pull request for release branches, so you get some information in your pull requests about whether the code is mature enough to be merged into the release branches
Job4: deploys to the appropriate environment on every commit on a release branch; on dev and staging you could probably trigger some more tests, if you have them
So Job1, Job2 and Job3 should run all the time. If pull requests to your release branches are approved by QA (i.e. reviews OK and tests successful) and merged to release branches, the deployment is done by Job4 automatically.
Whether you want to trigger Job4 only manually instead depends on your requirements and your dev process.
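To make the Job2 suggestion concrete, its nightly schedule could be expressed with a cron trigger in a declarative pipeline; the sketch below uses placeholder commands for the integration tests and static checks, and leaves out the "only if Job1 succeeded" condition.

```groovy
// Rough sketch of a nightly pipeline for Job2.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // once per night at 02:xx, minute spread by Jenkins' hash
    }
    stages {
        stage('Integration tests') {
            steps { sh './run-integration-tests.sh' }
        }
        stage('Static analysis') {
            steps {
                // Fail the stage if cppcheck reports any errors.
                sh 'cppcheck --error-exitcode=1 src/'
            }
        }
    }
}
```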

Jenkins - Multiple instances

We have around 30 Jenkins installs across our organization, both Windows and Linux. They are all used for different tasks and by different teams (e.g. managing Azure, manipulating data, testing applications etc.)
I have been tasked with looking at whether we could bring these all into one 'Jenkins Farm', but as far as I can see such a thing doesn't exist. Ultimately 'we' want some control and to minimize the footprint of Jenkins. The articles I have found don't recommend using a single master server (with multiple nodes) because of the following:
No role-based access for projects (affecting other teams code)
Plugins can affect all projects
Single point of failure as there is only one master server
Is it best to leave these on separate servers? Are there any other options?
I believe role-based access for projects is possible using
https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin
However, a single master isn't ideal, as you pointed out, because plugins can affect all projects. It is probably best to have separate Jenkins masters but configure agents such that they can be shared across teams/projects.

Deploying SSAS cube with no changes requires reprocess

I have an Analysis Services Cube in TEST and PROD.
We have recently started using branches in TFS.
After our development sprint and deployments, we will restore a backup of PROD over TEST, with the cubes coming through in their processed state.
However, recently, if we deploy the exact same project back to TEST (so the exact same schema as what is already deployed), all cubes become unprocessed.
I don't think it ever used to do this. For example, we tried changing a translation, not expecting to have to reprocess the cube, and found that the whole thing required processing (which takes many hours).
Any idea why? It feels like SSAS is treating this as a new database and recreating it (although the created timestamps on the database are still from many months ago).
Steps to reproduce:
1. Create a new branch called 'Branch A' from the master branch (the master branch is currently deployed to our Analysis Server).
2. Redeploy this new branch (even without changes).
3. All cubes now become Unprocessed (and deployment seemed to take longer than normal).
Try going to the Warehouse Control Web Service and checking the Processing Status, then manually process the data warehouse relational database by following the article Manually Process the Data Warehouse and Analysis Services Cube for Team Foundation Server.
To access the Warehouse Control Web Service:
Log on to the application-tier server.
Open Internet Explorer, type the following string in the Address bar, and then press ENTER:
http://localhost:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx
If manually processing the data warehouse doesn't work, try rebuilding it by following the article Rebuild the Data Warehouse and Analysis Services Cube.

Cannot add queue to existing TFS 2015 Build agent pool

I'm trying to set up a build server after upgrading to TFS 2015.
The way I envision it:
A single agent pool that will have 3 queues:
1. Nightly builds
2. CI builds
3. Gated/validation builds.
Each of them will have some agents. The goal is to have some control, to make sure nightly builds won't consume all the agents, so the gated queue will always have an available agent.
The problem I have now is that when I try to add a new queue, the option "Use existing pool" is disabled; I can only add a queue by creating a new agent pool.
It doesn't work the way you want it to work.
An agent can be a member of exactly one agent pool. The agent pool exists at the server level, not the Team Project Collection level.
An agent queue is tied to exactly one agent pool. However, agent pools can be referenced by different agent queues across Team Project Collection boundaries.
So, the upshot of this is that you can share your agent pools across multiple team project collections.
In VSTS, the distinction exists but is less relevant -- you can't have multiple Team Project Collections, so an agent pool and an agent queue are more or less equivalent, you just have to manage both of them.
You can use custom Capabilities (on your agents) and Demands (on your build definitions) to ensure that particular agents are always reserved for particular build scenarios.
Of course, task-based builds don't support gated checkin for TFVC yet, so your concern about gated agents always being available is moot, at least for now.
Now that all of that is out of the way, the answer to your question is simple:
Q: I'm trying to create a queue that uses an existing pool, but the controls are grayed out. Why?
A: On the Create Queue dialog box, you can't use an existing pool if it is already referenced by another queue. Each pool can be referenced by only one queue. If you delete the existing queue, you can then use the pool.
Ref: https://msdn.microsoft.com/en-us/Library/vs/alm/Build/agents/admin

How to cherry pick after having merged several changesets into one

We are using TFS 2010 with the Basic Branch Plan outlined in the Branching Guide on CodePlex for an internal web application. We have just the three basic branches: Dev, Main (QA/Testing), and Release (Production).
Because the app is an internal web application, we only support the single production release.
We basically develop locally and once we complete a task (a bug fix or enhancement), we commit it to Dev. We also generally do a Get Latest from Dev every day when we start work to pull down anything checked in by the other developers. After some period of time (usually a week or two), we'll decide we have enough changes to justify updating the QA site and do a Merge All from Dev to Main and then deploy the merged Main branch to a QA server for testing.
QA will then start testing the site, and after they're satisfied, we'll do a Merge All from Main to Release and deploy the merged Release branch to our production server. Sometimes we even wind up doing multiple Dev-to-Main merges before actually merging everything on up to Release.
Anyway, we've been using this strategy for a couple of months now and until recently everything was looking great. We were able to hotfix Release if we ran into some critical problem in production and then just merge it backwards. All was looking good.
Then we ran into something we didn't know how to deal with. We were given the directive of merging ONLY a single code fix on up from Main to Release (without merging everything else in Main). Now since we didn't know this was coming, when the original changeset was merged from Dev to Main, it was merged along with several other changesets. So when I went to merge from Main to Release, the only option I had was for the entire merged changeset. I couldn't "drill down" into the merged changeset and pick just the one original changeset from Dev that I really wanted.
I wound up manually applying the change like a hotfix in Release just to get it out there. But now I'm trying to understand how you prevent a situation like this.
I've read several articles on merging strategy and everything seems to recommend NOT cherry-picking changesets when you go to merge - to simply merge everything available... which makes sense... but if you always merge multiple changesets (and they become one changeset in the destination branch), then how do you potentially merge only one of the original changesets on up to production if the need arises?
For example, if merging Dev (C1, C2, C3) to Main (becomes C4) - then how to merge only C1 from 'within' C4 on up to Release?
It makes me think we'd be better off merging every single changeset individually from Dev to Main instead of doing several at once. At least then we could easily just take one on up from Main to Release if the need arises.
Any recommendations/life lessons/etc. on handling branching/merging for this specific scenario would be greatly appreciated.
In your scenario you could have done the following:
Roll back C4 in Main (becomes C5, because rollbacks are changesets themselves, which apply inverse changes).
Merge from Dev to Main again, but this time select only C1 (becomes C6 in Main).
Now roll back changesets C5 and C6 again, so you have all the changes in Main as before (becomes C7 in Main).
After this you have the same code base in Main as before, and you can now merge C6 (which has only the changes from C1) from Main to Release.
However, to prevent such trouble in the future you should really consider merging every single changeset from Dev to Main separately.
I would not recommend merging every single changeset from Dev to Main; that would be a bad idea with much additional risk!
but if you always merge multiple changesets (and they become one changeset in the destination branch), then how do you potentially merge only one of the original changesets on up to production if the need arises?
You don't and should not let the need arise.
This is probably not the easy answer that you are looking for, but there really is no easy answer. Merging every single changeset is creating a massive amount of effort to prepare for something that should not be happening anyway. Indeed, the process of merging individual changesets introduces yet more complexity that will, in the end, bite you in the ass when you can't figure out why your software is not working... "damn, I missed changeset 43 out of 50"...
If the result of a bug:
In your scenario it may have been better if you manually re-applied the "fix" to either a "hotfix" branch off of Release or directly to the Release line.
That is just the cost of having bugs slip through to production, and I would spend a little time figuring out why this problem got past QA and how to prevent it in the future.
If the result of an enhancement:
Did your financial (CFO) guys authorise the reduction in quality in production that is a direct result of shipping untested code? I hope that they did, as they effectively own the balance sheet on which that software is listed as an organisational asset!
It is not viable to ship only one feature, built and tested with other features, to production without completing your entire regression cycle again.
Conclusion
I would not recommend merging every single changeset or feature from Dev to Main; that would be a bad idea with much additional risk that should be highlighted to the appropriate people!
