Hygieia Jira collector shows 401 Unauthorized

I have installed the Hygieia components using the Hygieia starter kit repo. Here is the Dockerfile link: https://github.com/hygieia/hygieia-starter-kit/blob/master/hygieia-starter-kit/Dockerfile
I have installed the Jira collector on one of my servers, and it is throwing the error below:
Unexpected error occurred in scheduled task.
org.springframework.web.client.HttpClientErrorException: 401 Unauthorized
Here is my application.properties
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=XXX
# Database Port - default is 27017
dbport=27016
# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Logging File location
logging.file=./logs/jira.log
# PageSize - Expand contract this value depending on Jira implementation's
# default server timeout setting (You will likely receive a SocketTimeoutException)
feature.pageSize=100
# Delta change date that modulates the collector item task
# Occasionally, these values should be modified if database size is a concern
feature.deltaStartDate=2016-03-01T00:00:00.000000
feature.masterStartDate=2016-03-01T00:00:00.000000
feature.deltaCollectorItemStartDate=2016-03-01T00:00:00.000000
# Cron schedule: S M D M Y [Day of the Week]
feature.cron=0 * * * * *
# ST Query File Details - Required, but DO NOT MODIFY
feature.queryFolder=jiraapi-queries
feature.storyQuery=story
feature.epicQuery=epic
# JIRA CONNECTION DETAILS:
# Enterprise Proxy - ONLY INCLUDE IF YOU HAVE A PROXY
feature.jiraProxyUrl=http://proxy.com
feature.jiraProxyPort=9000
feature.jiraBaseUrl=https://XXX
feature.jiraQueryEndpoint=rest/api/2/
# For basic authentication, requires username:password as string in base64
# This command will make this for you: echo -n username:password | base64
feature.jiraCredentials=XXXX
# OAuth is not fully implemented; please blank-out the OAuth values:
feature.jiraOauthAuthtoken=
feature.jiraOauthRefreshtoken=
feature.jiraOauthRedirecturi=
feature.jiraOauthExpiretime=
#############################################################################
# In Jira, general IssueType IDs are associated to various 'issue'
# attributes. However, there is one attribute which this collector's
# queries rely on that change between different instantiations of Jira.
# Please provide a string name reference to your instance's IssueType for
# the lowest level of Issues (for example, 'user story') specific to your Jira
# instance. Note: You can retrieve your instance's IssueType Name
# listings via the following URI: https://[your-jira-domain-name]/rest/api/2/issuetype/
# Multiple comma-separated values can be specified.
#############################################################################
feature.jiraIssueTypeName=Bug
#############################################################################
# In Jira, your instance will have its own custom field created for 'sprint' or 'timebox' details,
# which includes a list of information. This field allows you to specify that data field for your
# instance of Jira. Note: You can retrieve your instance's sprint data field name
# via the following URI, and look for a package name com.atlassian.greenhopper.service.sprint.Sprint;
# your custom field name describes the values in this field:
# https://[your-jira-domain-name]/rest/api/2/issue/[some-issue-name]
#############################################################################
feature.jiraBugDataFieldName=customfield_10201
#############################################################################
# In Jira, your instance will have its own custom field created for 'super story' or 'epic' back-end ID,
# which includes a list of information. This field allows you to specify that data field for your instance
# of Jira. Note: You can retrieve your instance's epic ID field name via the following URI where your
# queried user story issue has a super issue (for example, epic) tied to it; your custom field name describes the
# epic value you expect to see, and is the only field that does this for a given issue:
# https://[your-jira-domain-name]/rest/api/2/issue/[some-issue-name]
#############################################################################
feature.jiraEpicIdFieldName=customfield_10002
#############################################################################
# In Jira, your instance will have its own custom field created for 'story points'
# This field allows you to specify that data field for your instance
# of Jira. Note: You can retrieve your instance's storypoints ID field name via the following URI where your
# queried user story issue has story points set on it; your custom field name describes the
# story points value you expect to see:
# https://[your-jira-domain-name]/rest/api/2/issue/[some-issue-name]
#############################################################################
feature.jiraStoryPointsFieldName=customfield_10003
#############################################################################
# In Jira, your instance will have its own custom field created for 'team'
# This field allows you to specify that data field for your instance
# of Jira. Note: You can retrieve your instance's team ID field name via the following URI where your
# queried user story issue has team set on it; your custom field name describes the
# team value you expect to see:
# https://[your-jira-domain-name]/rest/api/2/issue/[some-issue-name]
#############################################################################
feature.jiraTeamFieldName=
# Defines how to update features per board. If true then only update based on enabled collectorItems otherwise full update
feature.collectorItemOnlyUpdate=true
# Defines the maximum number of features allowed per board. If the limit is reached, collection will not happen for the given board
feature.maxNumberOfFeaturesPerBoard=1000
# Set this to true if you use boards as team
feature.jiraBoardAsTeam=false
#Defines the number of hours between each board/team and project data refresh
feature.refreshTeamAndProjectHours=3
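A 401 here generally means Jira itself is rejecting the value in feature.jiraCredentials (a mangled base64 string, an expired password, or an instance that only accepts API tokens for basic auth). One way to rule the collector out is to call the same REST API with the same token outside of Hygieia; below is a minimal Ruby sketch, with the Jira host and username:password as placeholders:
require "net/http"
require "base64"

# Same value you would put in feature.jiraCredentials
token = Base64.strict_encode64("username:password")
uri = URI("https://your-jira-domain/rest/api/2/myself")

request = Net::HTTP::Get.new(uri)
request["Authorization"] = "Basic #{token}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code  # 200 means the credentials are fine; 401 means Jira rejects them regardless of Hygieia
If this also returns 401, regenerate the token (echo -n username:password | base64, with no trailing newline) or use an API token in place of the password if your Jira instance requires one.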

Related

Where should data validation occur in Ruby on Rails with multiple tables at the same time?

I'm using RoR and am stuck on how to properly and elegantly validate data from the front end. My server is only an API-based RoR app.
There are 4 tables relevant to this issue: identity_types to store the type of identity, identities to store user identities, users to store the password digest and other info, and levels to store the user level, such as worker, manager and so on.
For the sake of neatness, I use pseudo code in YAML to describe them, with common fields like id and created_at omitted.
identity_types:
  name: [string, unique]
  # e.g., name = email/telephone/nickname, etc.
identities:
  content: [string, unique]
  user_id: [integer, not_null]
  identity_type_id: [integer, not_null]
  # with a combined unique constraint on [user_id, identity_type_id]
users:
  password_digest: [string]
  supervisor_id: [integer, not_null]
  level_id: [integer, not_null]
  banned: [boolean]
levels:
  name: [string, unique]
  # e.g., name = worker/manager/director, etc.
I have ActiveRecord models for each of them, the relevant migrations, and their controllers. I follow RESTful conventions and route 'POST /users' to 'users#create'.
Now here comes the question: creating a new worker has to respect the following rules.
A worker's supervisor can only be a manager.
The content of an identity can't be duplicated.
What I currently do is put a lot of this logic into the controller, not the model.
# controllers/users_controller.rb
class UsersController < ApplicationController
  # params_from_fe = %i[identity_content identity_type_uuid password password_confirmation supervisor_uuid]
  def create
    # 1. check that the required parameters are present
    # 2. find_by identity_content and check whether the content is duplicated
    # 3. find_by supervisor_uuid and check whether supervisor.level is manager
    # 4. Nothing is done with password + password_confirmation, since they are handled by the model layer.
    # if all checks pass:
    #   user = User.create
    #   user.identity.create
    # other code ...
  end
end
Among the checks above, I already have a unique constraint on identity content, so the 2nd step might be left out. But what about the 3rd step: can I verify whether a supervisor is a manager in the model layer, i.e. in ActiveRecord?
Thanks.
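One way to express rule 3 in the model layer is a custom validation; here is a rough sketch only, assuming the associations and level names from the pseudo code above (supervisor as a self-referential belongs_to, levels named "worker" and "manager"):
class User < ApplicationRecord # use ActiveRecord::Base on Rails < 5
  belongs_to :level
  belongs_to :supervisor, class_name: "User"
  has_many :identities

  # Rule: a worker's supervisor can only be a manager
  validate :supervisor_must_be_manager, if: -> { level && level.name == "worker" }

  private

  def supervisor_must_be_manager
    unless supervisor && supervisor.level && supervisor.level.name == "manager"
      errors.add(:supervisor, "must be a manager")
    end
  end
end
With that in place, step 3 drops out of the controller: user.save simply returns false with a meaningful error when the rule is violated, and step 2 is already covered by the database constraint (plus a validates :content, uniqueness: true on Identity if you want friendlier errors).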

create node attribute/return value in custom resource for chef

I have a Chef resource that needs to return a version. I looked it up and found that the best way is to publish it as a node attribute. Here is the resource code (dj_artifactory_version):
require "open-uri"
require "json"
def whyrun_supported?
true
end
def get_version(name, user, pass, type, organization, art_module, repos, version)
if (type.match(/snapshot$/i) and version.match(/latest$/i))
string_object = open("https://artifactory.io/artifactory/api/search/versions?g=#{organization}&v=*.*.*&a=#{art_module}&repos=#{repos}", :http_basic_authentication=>["#{user}", "#{pass}"], :ssl_verify_mode=>OpenSSL::SSL::VERIFY_NONE)
json_file = JSON.parse(string_object.read)
version_array = Array.new
json_file["results"].each do |version|
version_array.push(version["version"])
end
unique_versions=(version_array.uniq).max
node.set['artifact']['snapshot']['latest'] = unique_versions
Now I use this Chef resource in my recipe to get the version:
dj_artifactory_version "test" do
type "snapshot" # options - snapshot/release
organization "djcm.billing.api.admin" # layout.organization in artifactory properties.
modules "paypal" # layout.properties in artifactory properties.
repos "djcm-zip-local" # repository name in artifactory
version "latest" #latest/oldest
end
p "#node{['artifact']['snapshot']['latest']}"
I create default['artifact']['snapshot']['latest'] in default.rb with an initial value, but even after I run my recipe the old value doesn't change. Interestingly, when I print the same attribute inside my resource, it prints the new value.
What am I doing wrong, and is there a better way to publish a value from your own resource?
Chef resources do not have return or output values. More specifically, the problem you are hitting is that Chef is a two-pass system, so the p call happens during the compile phase, before the resource action runs. You likely need to totally rethink this code: get_version should probably be a library helper method, not a resource, but it's hard to say without seeing the rest of the code.
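For illustration, a rough sketch of that library-helper approach (the module name, method name and simplified query below are made up for the example, not taken from the original cookbook):
# libraries/artifactory_helper.rb
require "open-uri"
require "json"

module ArtifactoryHelper
  # Returns the highest unique snapshot version found in Artifactory
  def latest_snapshot_version(user, pass, organization, art_module, repos)
    url = "https://artifactory.io/artifactory/api/search/versions?g=#{organization}&v=*.*.*&a=#{art_module}&repos=#{repos}"
    body = open(url, :http_basic_authentication => [user, pass], :ssl_verify_mode => OpenSSL::SSL::VERIFY_NONE).read
    JSON.parse(body)["results"].map { |r| r["version"] }.uniq.max
  end
end

# make the helper callable from recipes
Chef::Recipe.send(:include, ArtifactoryHelper)
A recipe can then call it directly at compile time and use the value wherever it is needed:
version = latest_snapshot_version("user", "pass", "djcm.billing.api.admin", "paypal", "djcm-zip-local")
p version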

Capistrano, Rails, PostgreSQL master-slave replication making it hard to add additional tables to slaves

I have built a document-management system which consists of a content-management Rails application (for writing and managing docs, users and other content) and an end-user Rails application (for reading, searching, etc.).
The content-management app is on a server by itself, and I have a couple of end-user servers to ensure high availability (a requirement). The end-user servers act as slaves to the content-management server (PostgreSQL master/slave replication).
This worked great up until now, when a new feature is required: end users should be able to generate PDFs containing user-selected documents. That in itself is no problem; the system should handle a large amount of docs, so I've added a Sidekiq worker to do the PDF generation. But here comes the tricky part:
How can I add state to my end-user apps so that I can inform the user when the PDF has finished generating? In a perfect world I'd add a new model, say PDFPrintJob, with a status attribute that I could inspect via a controller show action. But the problem is that all end-user apps sit on a read-only database due to the master/slave situation.
So how should I fix this? Is there a better way of structuring the servers that would enable me to have additional tables on the slaves which are writable?
I would be happy if the database servers could do the heavy lifting of keeping content synced.
I'm running Rails 4 on JRuby 1.7.x and PostgreSQL 9.2
Thanks a bunch
Store the status of the job in Sidekiq with a plugin like sidekiq-status. At the very minimum you could do some polling in JavaScript to grab the status of your job. If you want to get fancy you could use server-sent events with Rails 4.
Here's the portion of the sidekiq-status README that shows how to store and retrieve the status and data of each Sidekiq job.
https://github.com/utgarda/sidekiq-status
Retrieving status
Query for job status any time later:
job_id = MyJob.perform_async(*args)
# :queued, :working, :complete or :failed , nil after expiry (30 minutes)
status = Sidekiq::Status::status(job_id)
Sidekiq::Status::queued? job_id
Sidekiq::Status::working? job_id
Sidekiq::Status::complete? job_id
Sidekiq::Status::failed? job_id
Tracking progress, saving and retrieving data associated with a job
class MyJob
  include Sidekiq::Worker
  include Sidekiq::Status::Worker # Important!

  def perform(*args)
    # your code goes here

    # the common idiom to track progress of your task
    at 5, 100, "Almost done"

    # a way to associate data with your job
    store vino: 'veritas'

    # a way of retrieving said data
    # remember that retrieved data is always String|nil
    vino = retrieve :vino
  end
end
job_id = MyJob.perform_async(*args)
data = Sidekiq::Status::get_all job_id
data # => {status: 'complete', update_time: 1360006573, vino: 'veritas'}
Sidekiq::Status::get job_id, :vino #=> 'veritas'
Sidekiq::Status::num job_id #=> 5
Sidekiq::Status::total job_id #=> 100
Sidekiq::Status::message job_id #=> "Almost done"
Sidekiq::Status::pct_complete job_id #=> 5
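To tie this back to the read-only slave problem: the job status lives in Redis rather than PostgreSQL, so the end-user app can expose it without any writable table. A rough sketch of what that could look like (the controller, job class and routes below are assumptions, not from the question):
# app/controllers/pdf_print_jobs_controller.rb
class PdfPrintJobsController < ApplicationController
  # POST /pdf_print_jobs - enqueue the PDF generation and hand the job id to the client
  def create
    job_id = PdfGenerationJob.perform_async(params[:document_ids])
    render json: { job_id: job_id }, status: :accepted
  end

  # GET /pdf_print_jobs/:id - poll this from JavaScript until status is complete
  def show
    render json: {
      status: Sidekiq::Status::status(params[:id]), # :queued, :working, :complete, :failed or nil
      pct_complete: Sidekiq::Status::pct_complete(params[:id])
    }
  end
end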

Mongoid document Time to Live

Is there a way to set a time to live for a document so that it gets destroyed automatically? I want to create guest users that are temporary per session, so after a week the document should be removed automatically.
MongoDB (version 2.2 and up) actually has a special index type that allows you to specify a TTL on a document (see http://docs.mongodb.org/manual/tutorial/expire-data/). The database removes expired documents for you--no need for cron jobs or anything.
Mongoid supports this feature as follows:
index({created_at: 1}, {expire_after_seconds: 1.week})
The created_at field must hold date/time information. Include Mongoid::Timestamps in your model to get that for free.
UPDATE:
If you want to expire only a subset of documents, then you can create a special date/time field that is only populated for that subset. Documents with no value or a non-date/time value in the indexed field will never expire. For example:
# Special date/time field to base expirations on.
field :expirable_created_at, type: Time
# TTL index on the above field.
index({expirable_created_at: 1}, {expire_after_seconds: 1.week})
# Callback to set `expirable_created_at` only for guest roles.
before_create :set_expire, if: "role == :guest"
def set_expire
  self.expirable_created_at = Time.now
  return true
end
First, you should add include Mongoid::Timestamps to your model.
Second, you should add a cron job or a worker of some sort that will run periodically (if you don't want a cron job, perhaps you can use this gem: https://github.com/daddye/foreverb).
Then you can easily set up a check along the lines of:
if model.created_at < 1.week.ago
  model.destroy
end
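For completeness, with Mongoid the sweep can also be done in a single query instead of checking each model instance; a sketch assuming a User model with a role field marking guests:
# destroy all guest users created more than a week ago
# (destroy_all runs callbacks; use delete_all to skip them)
User.where(role: :guest, :created_at.lt => 1.week.ago).destroy_all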

How to exclude DBGrid.Column.FieldName in .pot file

I made an application with Delphi 6.
After that I extracted a .pot file with all the strings to translate.
The problem is that there are strings that don't have to be translated, and that will generate problems if translated.
One of these is TDBGrid.Columns[x].FieldName.
I tried to put these lines into the ggexclude.cfg file, but they don't work:
# exclude all occurrences of the specified class
# and property in all DFM files in or below the
# path where "ggexclude.cfg" is in
[exclude-form-class-property]
TDBGrid......FieldName
TDBGrid.....FieldName
TDBGrid....FieldName
TDBGrid...FieldName
TDBGrid..FieldName
TDBGrid.FieldName
item.FieldName
TDBGrid.Columns.FieldName
TDBGrid.Columns.TDBGridColumns
TDBGrid.Columns.TDBGridColumns.FieldName
TDBGrid.Columns.Item.FieldName
TColumn.FieldName
TDBGridColumns.FieldName
FieldName
*.FieldName
I think the problem is that within the .dfm file the parser doesn't understand that they are part of a TColumn object:
inherited DBGTable: TDBGrid
  Height = 309
  DataSource = DMUsers.DSUser
  Columns = <
    item
      Expanded = False
      FieldName = 'USER'
      Visible = True
    end
    item
      Expanded = False
      FieldName = 'CODE'
      Width = 31
      Visible = True
    end
    item
      Expanded = False
      FieldName = 'NAME'
      Width = 244
      Visible = True
    end>
end
Does anybody have a workaround?
I can't trust the automatically generated ignore.pot, because there are some strings that cause false positives.
The documentation for the ggexclude.cfg file states that you cannot address items that are part of a collection:
A special case are collections in forms (like TDBGridColumns in a
TDBGrid [...]) You can exclude only the whole collection, but not
certain properties of a collection.
So the workaround would be to exclude the whole Columns collection:
TDBGrid.Columns
But this way you will lose Title.Caption too.
The only other workaround I see would be to modify dxgettext. The following would be nice to have, IMHO:
[always-exclude-property]
FieldName
Edit: I wanted to link to the ggexclude.cfg documentation, but I cannot find it online right now, so I'll post it as it is saved in my own ggexclude.cfg file, but without any guarantee:
# Text in square brackets, like "[exclude-dir]", is called a "section".
# Each line that is not empty, not a comment and not a section holds
# exactly 1 "value".
# All lines below a section are scanned for values belonging to that
# section until the next section starts. You can use the same section
# several times. It will all be added up.
[exclude-dir]
# This section prevents a whole folder and all its subfolders from being scanned.
# Each value is exactly one folder. On Windows, it's not case-sensitive.
# You can use relative or absolute paths. No wildcards allowed.
# example:
# subfolder
# these are valid values as well:
# another\folder
# another\folder\
# Windows: D:\yet\another\folder
# Linux: /home/zaphod/projects/subfolder/
# You don't have to worry about the path delimiters, both "/" and "\"
# can be used. They are converted to "/" internally
[exclude-file]
# This section prevents a whole file from being scanned.
# Each value is exactly one file. On Windows, it's not case-sensitive.
# You can use relative or absolute paths. Wildcards allowed.
# example:
# Unit4.dfm
# Using the wildcard ".*" for a file extension means that the following
# matching Delphi-files will be excluded: dfm, xfm, pas, inc, rc, dpr:
# Unit5.*
# If Unit3 is already excluded by the [exclude-dir] above, because it
# is located in a subfolder of "subfolder", listing it here therefore
# has no further effect:
# subfolder\subfolder\Unit3.dfm
# you can use absolute paths as well, like this:
# on Windows: D:\test\Unit.pas
# on Linux: /home/zaphod/projects/MainForm.*
[exclude-form-class-property]
# This section excludes a certain property of a class
# from being scanned in all forms of the folder and subfolders
# where "ggexclude.cfg" is located.
# The format for a value is "Classname.Propertyname". It's not case-sensitive. No wildcards allowed.
# Classname is obvious, the propertyname has to be written the way it
# is written in the form file. If you're in doubt about how a certain property
# has to be stored here, just copy and paste the line from the DFM file here and
# put the classname before that.
# For simple strings the property name is one word:
# TLabel.Caption
# ...and for TStrings it's like this:
# TListbox.Items.Strings
# TMemo.Lines.Strings
# TQuery.SQL.Strings
# TEdit is listed in the [exclude-form-class]-section below which means
# that the whole class will be excluded. Listing TEdit.Text here therefore
# has no further effect
# TEdit.Text
# A special case are collections in forms (like TDBGridColumns in a TDBGrid,
# TParams in a TQuery or TActionManager.ActionBars). You can exclude only
# the whole collection, but not certain properties of a collection. That
# means as well that in the case of nested collections (see TActionManager.ActionBars
# in the sample unit "nestedcollections.dfm"), everything that appears below
# the collection with the highest level will be ignored.
# Note that some collections are saved with another name than their propertyname.
# For example: "TQuery.Params" will be saved as "ParamData" in the form file.
# TQuery.ParamData
# TDBGrid.Columns
# TActionManager.ActionBars
# these lines won't work:
# TDBGrid.Columns.Title.Caption
# TActionManager.ActionBars.ContextItems
# ("ContextItems" is a nested collection, which can hold another nested collection and so on)
[exclude-form-class]
# This section excludes a whole class
# from being scanned in all forms of the folder and subfolders
# where "ggexclude.cfg" is located.
# The format for a value is just "Classname". It's not case-sensitive.
# A wildcard "*" can be used optionally.
# A special case are collections, see [exclude-form-class-property] for that
# Here, everything of TEdit in DFM/XFM-files will be ignored. Remember:
# other classes derived from TEdit have to be listed separately in order to
# exclude their properties as well. Inheritance is not recognized by dxgettext:
# TEdit
# Visual containers like TPanel or TScrollbox have to be treated slightly differently.
# If you have a TPanel with a TLabel on it, writing "TPanel" would only
# exclude the properties of TPanel itself. If you want to exclude
# everything contained in a TPanel, use the wildcard "*" at the end, like this:
# TPanel*
# The following only excludes the properties of TScrollbox itself, but not the controls
# contained in Scrollboxes (except other classes explicitly listed here, like
# TEdit above):
# TScrollbox
[exclude-form-instance]
# This section excludes a certain instance (=object) of a class in a certain form file
# from being scanned.
# Each value is exactly one file with one instance. The format is
# "filename:instancename". On Windows, the "filename" part is not
# case-sensitive. You can use relative or absolute paths.
# Note that if the instance is something like a container or menu,
# everything belonging to that will be excluded.
# Note also that a frame on a form might contain a component with the
# same name as a component on the form. They would both be excluded.
# Unit6.dfm:Popupmenu
# Unit6.dfm:Label5
You should probably try running
msgmkignore filenamethatcontainsextrajunk.po -o autogenignore.po
Then open up autogenignore.po and find the special way it has declared all your FieldName excludes (that's the job of msgmkignore). Every time you auto-generate it, you then have to review the auto-generated exclusion rules. It seems you're trying to write all your exclusion rules by hand; it looks to me like you'd be better off taking the auto-generated ignores and reviewing them by hand to cover all the database field names and column names.
You obviously can't hand the entire job of "ignores" to the msgmkignore tool, as you state in your question, but you can use your brain plus this tool and combine the results.
