How do I place important information like credentials for scenario outline examples in a separate file in SpecFlow 3.9.22?

For SpecFlow 2.4 and earlier there was the SpecFlow.Plus.Excel NuGet package, but it's not available for newer versions. Is there an alternative for keeping scenario outline examples in a separate file, so that I can add that file to .gitignore?
Additional clarification after questions in the comments:
What I'm trying to do is put multiple sets of credentials into a scenario outline's examples. I want to keep the scenario outline itself in the repo, but not the credentials. So I wanted to put the examples containing the credentials in a separate file, add that file to .gitignore, and have the scenario outline use that local file as the source of its parameters (examples). In earlier versions this was possible via an Excel file.
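For context, the kind of scenario outline I mean looks roughly like this (the step wording and the example columns are made up for illustration; the Examples table is the part I want to keep out of the repo):

    Scenario Outline: Log in with different accounts
      Given I am on the login page
      When I log in as "<username>" with password "<password>"
      Then I should see the dashboard

      Examples:
        | username | password |
        | alice    | secret1  |
        | bob      | secret2  |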

Related

Best way to migrate a single project from Jira 5.0 to Jira 8?

I want to move a single project from a Jira 5.0 instance to a new Jira 8.0 instance that is already being used for other projects, so the process must not bring in configurations, workflows, etc., nor should it alter existing projects.
I'm only interested in importing issues and related data:
title, description, etc. (obviously)
attachments (images, files, whatever)
issue links
issue type (with mapping to new types in case they don't match)
... (other properties that I'm forgetting right now)
I've just started researching the topic and have already found several options. It's not clear whether they're all available to me, mostly due to the starting Jira version. They are:
Export to CSV and import from CSV
Export to XML
Import from JSON (though I've yet to find a JSON export)
REST API
Import project from backup
... and surely others
Of course I'd like the most complete yet least error-prone method, though if resorting to the REST API is the only way to be sure I import everything I want, I'm ready to write a script/program.
So, what should I choose?
P.S.: I'm not sure if this fits this community; is there a more appropriate one?
The easiest way is to do a CSV export and gather all attachments (from jira_home/data/attachments). Then copy the attachments to jira_home/import on the new instance. You'll need to edit the export file so that it matches the names and paths of your attachments in order to import them successfully.
The last step is to import the CSV into your Jira 8 instance.
I suggest trying this on a dev/stage environment first, because there are many small details that can affect the import.
Some useful data is here:
https://confluence.atlassian.com/adminjiraserver/importing-data-from-csv-938847533.html
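For illustration only: assuming the semicolon-separated attachment format described on that Atlassian page (timestamp; author; file name; location), a row of the edited CSV might look roughly like the sketch below, with the file:// reference pointing at the attachment you copied into the import directory. The column layout, date format, and names here are made up; check the linked documentation for the exact syntax your Jira version expects.

    Summary,Issue Type,Attachment
    "Login page broken","Bug","01/02/2019 10:15;jsmith;screenshot.png;file://screenshot.png"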

Adding new translations

I'm using FormatJS to localize my app. There's a handy CLI to extract all the translations from the code base. I can generate the en.json file, and send it to the translator. When I get the translation back I can save this as fr.json. So far so good.
What I don't understand is what to do when I add new translations to my app. When I run formatjs extract again, I get a new en.json file with all the keys. Obviously I don't want to send the whole thing again to the translator. I could diff the new en.json against the previous version, but it's such a basic step that I feel like I must be missing something. I didn't find anything about this in the docs.
How is this part of the workflow handled with FormatJS?
It seems like translation services typically take care of diffing the data. You send them the whole template file and they send back translation files with all the translated strings (new ones plus the ones that had already been translated). At least that's how it works with the provider my company uses.
My workflow is as follows:
add new translations in the source code with intl.formatMessage()
formatjs extract to create the new en.json file (template file)
replace the translation files (e.g. es.json, fr.json etc.) with the new ones from the provider
formatjs compile to generate the machine files
I also created a test to ensure that each key in en.json has a corresponding key in each translation file.
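As a rough sketch, the key-coverage test can look something like this (assuming Jest and JSON files in a lang/ folder; the locale list and paths are made up, so adjust them to your project):

    // lang-keys.test.ts -- every key in the template must exist in each translation file
    import * as fs from 'fs';
    import * as path from 'path';

    const langDir = path.join(__dirname, 'lang');        // assumed location of the JSON files
    const template: Record<string, unknown> = JSON.parse(
      fs.readFileSync(path.join(langDir, 'en.json'), 'utf8'),
    );
    const locales = ['es', 'fr'];                        // translations returned by the provider

    describe('translation files', () => {
      for (const locale of locales) {
        test(`${locale}.json contains every key from en.json`, () => {
          const translated: Record<string, unknown> = JSON.parse(
            fs.readFileSync(path.join(langDir, `${locale}.json`), 'utf8'),
          );
          const missing = Object.keys(template).filter((key) => !(key in translated));
          expect(missing).toEqual([]);                   // lists any untranslated keys on failure
        });
      }
    });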

Setup for multiple Swagger API files

I am working on a project where we are rewriting the interfaces of an existing application, porting everything to Swagger/OpenAPI.
Right now, each feature has its own yml file, which is a standalone spec. But there are some drawbacks:
duplicated content in the yml files (e.g. models which could be shared across files)
duplicated program code (which is generated from those yml files).
having to process each yml file individually when using tools.
Ideally we would like to have a separate folder for each service, with the models and service description for that specific service close together, but separated from the other services. Of course there are also shared models, which we then want in a different folder (e.g. "/shared-models"). And finally we want all those files to be included by one main root yml file.
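To make that concrete, a rough sketch of the kind of root file we have in mind (folder, file, and schema names are made up; the referenced files are assumed to contain plain path item and schema objects under the given keys):

    # openapi.yaml -- the single root file
    openapi: 3.0.3
    info:
      title: Example API
      version: "1.0.0"
    paths:
      /orders:
        $ref: './orders-service/paths.yaml#/orders'        # per-service path items
      /customers:
        $ref: './customers-service/paths.yaml#/customers'
    components:
      schemas:
        Order:
          $ref: './orders-service/models.yaml#/Order'      # service-specific model
        Error:
          $ref: './shared-models/common.yaml#/Error'       # shared model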
So, we have been looking at splitting/importing files with a $ref attribute. But it is tricky to come up with a full-scale file and folder structure, because the spec seems to allow usage of $ref in some places, but not all. You can't just split and structure the files any way you like, so we will probably need some kind of trade-off.
I was especially wondering how other companies do this setup (e.g. an example of a setup that uses an enterprise-level structure of Swagger files would be excellent). We like to keep things simple and, whenever possible, follow standards or popular conventions.
(For clarity: my question is not: "how to use $ref")

Kettle over kettle transform file in Pentaho CDE

I'm facing an issue with the kettle over kettle transform file step in Pentaho CDE. I have created a transformation file and it works perfectly on its own.
In the properties of the kettle over kettle transform step there is an option to select the transformation file, but when I browse I can only see three folders: home, public, etc.
So where do I have to keep my transformation file so that I can access it when selecting it from "select transformation file"?
You can create a separate folder/directory (e.g. Transformations) inside any of the already present directories (say, Admin). Next, refresh your repository/cache and you will be able to see the files. Link it from your CDE.
Ideally, when building a project, I create a separate folder with my project initials, say PROJECT. Inside this folder, I create the rest of the sub-folders. This helps keep the project code separated.
Hope this helps :)
Edit:
The files in the User Console cannot be accessed from your local file system as of Pentaho version 5. The only way to upload or download a file is either to load it from the User Console or to execute commands from the command line. Check the link below. The files are stored internally in the Jackrabbit repository of the Pentaho BI server.
http://infocenter.pentaho.com/help/index.jsp?topic=%2Fadmin_guide%2Ftask_import_export_repository.html
Extra note: If you still want to access the files, there is a REST API that handles most of the Pentaho BI server's capabilities. You may also check this link: http://help.pentaho.com/Documentation/5.2/0R0/070/010/0A0/0Q0#

File repository in Ruby on Rails

I would like to create a simple file repository in Ruby on Rails. Users have their accounts, and after one logs in they can upload a file or download files previously uploaded.
The issue here is the security. Files should be safe and not available to anyone but the owners.
Where, in which folder, should I store the files, to make them as safe as possible?
Does it make sense to rename the uploaded files, store the original names in a database, and restore them when needed? This might help avoid name conflicts, though I'm not sure if it's a good idea.
Should the files be stored all in one folder, or should they be somewhat divided?
rename the files, for one reason, because you have no way to know if today's file "test" is supposed to replace last week's "test" or not (perhaps the user had them in different directories)
give each user their own directory, this prevents performance problems and makes it easy to migrate, archive, or delete a single user
put metadata in the database and files in the file system
look out for code injection via file name
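For that last point, a minimal sketch of the kind of filename cleanup meant here (plain Ruby; the whitelist is illustrative, and a dedicated sanitizer library may be preferable):

    # Strip path components and risky characters from a user-supplied file name.
    def sanitize_filename(original)
      name = File.basename(original)           # drops directory parts, e.g. "../../etc/passwd" -> "passwd"
      name.gsub(/[^0-9A-Za-z.\-_]/, "_")       # replace anything outside a conservative whitelist
    end

    sanitize_filename("../weird name (1).pdf")   # => "weird_name__1_.pdf"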
This is an interesting question. Depending on the level of security you want to apply I would recommend the following:
Choose a folder that is only accessible by your app server (if you choose to store the files in the filesystem).
I would always recommend renaming the files to a randomly generated hash (or an incrementally generated name like those used in URL shorteners; see the open source implementation of rubyurl). However, I wouldn't store the files themselves in a database, because filesystems are built for handling files, so let them do the job. You should store the metadata in the database so you can set the right file name when the user downloads the file.
You should partition the files among multiple folders. This gives you several advantages. First, filesystems are not built to handle millions of files in a single folder; operations that fetch all files from a folder take significantly more time. If you obfuscate the original file name, you could create one directory per leading character of the generated name and get a fairly even distribution of files across directories.
One last thing to consider is the possible collision of file names. A user should not be able to guess another user's filename, so you might need some additional checks here.
Depending on the level of security you want to achieve you can apply more and more patterns.
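As a minimal sketch of the hash-and-partition idea (Ruby; the storage root and partitioning scheme are made up, so adapt them to your layout):

    require "securerandom"

    STORAGE_ROOT = "/var/app/uploads"            # assumed private directory, not under public/

    # Build an obfuscated storage path partitioned by the first two characters of a
    # random name, e.g. ".../3f/3f9a...". Keep the original filename only as metadata in the DB.
    def storage_path_for_upload
      token = SecureRandom.hex(16)               # 32-character random hex name
      File.join(STORAGE_ROOT, token[0, 2], token)
    end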
Just don't save the files in the public folder, and create a controller that will send the files.
How you want to organise things from that point on is your choice. You could make a sub-folder per user. There is no need to rename from a security point of view, but do try to clean up the filename; spaces and non-ASCII characters make things harder.
For simple cases (where you don't want to distribute the file store):
Store the files in the tmp directory. DON'T store them in public. Then only expose these files via a route and controller where you do the authentication/authorisation checks.
I don't see any reason to rename the files; you can separate them out into sub directories based on the user ID. But if you want to allow the uploading of files with the same name then you may need to generate a unique hash or something for each file's name.
See above. You can partition them any way you see fit. But I would definitely recommend partitioning them and not lumping them in one directory.
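A rough sketch of the route-plus-controller approach described in these answers (Rails-style; the Upload model, its columns, and the authentication helper are made-up placeholders for whatever your app uses):

    # config/routes.rb:  get "/uploads/:id", to: "uploads#show"
    class UploadsController < ApplicationController
      before_action :authenticate_user!              # e.g. Devise; substitute your own auth check

      def show
        upload = Upload.find(params[:id])            # metadata row: owner id, storage path, original name
        return head :forbidden unless upload.user_id == current_user.id

        # Stream the file from a private directory (never from public/)
        send_file upload.storage_path,
                  filename: upload.original_name,
                  disposition: "attachment"
      end
    end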
