Compare two NeoLoad scripts' contents

I need to compare two NeoLoad projects (let's say project1 and project2). project1 has ~120 user paths and project2 has ~100 user paths. I need to copy all the differences from project2 to project1, including any additional user paths, request body changes, user parameters, and populations. Manually opening each request and comparing takes a lot of time. I don't have a /config.zip file. Instead I have a /team folder, which has separate populations, vus, etc. folders. I tried to compare the XMLs of both projects under /team/vus/#userpathname/action-container, but that is also taking a lot of time. Is there an easy way to compare two NeoLoad scripts?
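Since each user path, population, and so on lives in its own XML file under /team, one workaround is to diff the two folder trees programmatically instead of opening each request. A minimal Groovy sketch (the project paths are illustrative, and this only reports differences; copying them across is a separate step):

```groovy
import groovy.io.FileType

def root1 = new File('project1/team')   // assumption: your project locations
def root2 = new File('project2/team')

// Collect every file path relative to its project root.
def relativePaths = { File root ->
    def paths = [] as Set
    root.eachFileRecurse(FileType.FILES) { f ->
        paths << root.toPath().relativize(f.toPath()).toString()
    }
    paths
}

def p1 = relativePaths(root1)
def p2 = relativePaths(root2)

// Artifacts present in only one project (e.g. extra user paths)
(p2 - p1).each { println "only in project2: $it" }
(p1 - p2).each { println "only in project1: $it" }

// Files present in both projects but with different contents
p1.intersect(p2).each { rel ->
    if (new File(root1, rel).text != new File(root2, rel).text) {
        println "differs: $rel"
    }
}
```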

Related

Setup for multiple Swagger API files

I am working on a project where we are rewriting the interface of an existing application, porting everything to Swagger/OpenAPI.
Right now, each feature has its own yml file, which is a standalone spec. But there are some drawbacks:
- duplicated content in the yml files (e.g. models that could be shared across files)
- duplicated program code (which is generated from those yml files)
- having to process each yml file individually when using tools
Ideally we would like to have a separate folder for each service, with the models and service description for that specific service close together, but separated from the other services. Of course there are also shared models, which we would then want in a different folder (e.g. "/shared-models"). And finally we want all those files to be included by one main yml root file (a sketch of such a layout appears after this question).
So we have been looking at splitting/importing files with a $ref attribute. But it is tricky to come up with a full-scale file and folder structure, because the spec seems to allow usage of $ref in some places, but not all. You can't just split and structure files any way you like, so we will probably need some kind of trade-off.
I was especially wondering how other companies do this setup. (An example of a setup that uses an enterprise-level structure of Swagger files would be excellent.) We like to keep things simple and, whenever possible, stick to standards or popular conventions.
(For clarity: my question is not: "how to use $ref")
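For concreteness, the kind of layout described above might look like the sketch below (folder and file names are purely illustrative):

```
api/
    root.yml              # main spec; $refs each service file below
    shared-models/
        common.yml        # models shared across services
    service-a/
        service-a.yml     # paths and operations for service A
        models.yml        # models used only by service A
    service-b/
        service-b.yml
        models.yml
```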

Artifactory docker images without manifests

We have a number of broken Docker image uploads in Artifactory. It's quite difficult to clean these up, since the package search feature does not find these image tags as packages. In the UI, the only way to remove them without search is one tag at a time. I'm curious whether anyone else has found a solution for this. Ideally there would be some AQL or other method to identify and remove any folder in a Docker repo that does not contain a manifest file.
You can try creating an AQL query. AQL can search for artifacts based on properties, which should help you achieve the cleanup you want: https://www.jfrog.com/confluence/display/RTF/Artifactory+Query+Language
I don't think you can trap this with a single AQL, but here is an idea that uses two AQLs:
1. Prepare a list of all paths that contain a manifest.json file.
2. Prepare a list of all paths that contain sha256__* files (this list will need to be de-duplicated, because the same path will probably be listed multiple times).
3. Sort the two lists and compare them to each other.
4. Lines (i.e. paths) that show up only in the second list are paths to broken images that are missing their manifest file.
Now, after confirming that the result list from step 4 is correct, you can construct from it a set of DELETE API calls (one for each path).
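A hedged Groovy sketch of those steps, assuming a repo named docker-local and posting the queries to Artifactory's /api/search/aql endpoint (the base URL and credentials are placeholders):

```groovy
import groovy.json.JsonSlurper

def base = 'https://artifactory.example.com/artifactory' // assumption: your base URL
def auth = 'Basic ' + 'user:apikey'.bytes.encodeBase64()  // assumption: your credentials

// POST an AQL query and return the set of repo paths it matches.
def aqlPaths = { String aql ->
    def conn = new URL("$base/api/search/aql").openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'text/plain')
    conn.setRequestProperty('Authorization', auth)
    conn.outputStream.withWriter { it << aql }
    new JsonSlurper().parse(conn.inputStream).results*.path as Set // as Set de-duplicates
}

// Step 1: every path that holds a manifest.json
def withManifest = aqlPaths('items.find({"repo":"docker-local","name":"manifest.json"})')
// Step 2: every path that holds layer blobs (sha256__*)
def withLayers   = aqlPaths('items.find({"repo":"docker-local","name":{"$match":"sha256__*"}})')

// Steps 3-4: paths that have layers but no manifest are the broken tags.
// Review this list, then issue DELETE $base/docker-local/<path> for each entry.
(withLayers - withManifest).each { println "broken: $it" }
```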

TFS Copy Files action exclude syntax

I'm trying to understand the logic behind the Content filter syntax of the TFS 2013 Copy Files action. I have a simple solution with a couple of projects and some test projects, which unfortunately have huge test files. I don't want the test projects or the test files in the drop folder. The test projects are named like Project.Test, Project2.Test, etc.
I tried to filter them like this:
**\bin\$(BuildPlatform)\$(BuildConfiguration)\**;-:**\*.Test\**
But this doesn't copy anything.
Test files are located in directories called TestFiles.
This also filters out everything:
**\bin\$(BuildPlatform)\$(BuildConfiguration)\**;-:**\TestFiles\**
If the first part matches the correct files in the bin directory, how can the second part then filter them out? I don't see how it can match the same files.
The solution to my problem was to use a different syntax to exclude certain directories (*.Test here):
**\!(*.Test)\bin\$(BuildPlatform)\$(BuildConfiguration)\**
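To illustrate, assuming $(BuildPlatform) is x86 and $(BuildConfiguration) is Release: the !(*.Test) segment matches any single directory name that does not itself match *.Test, so:

```
Project\bin\x86\Release\Project.dll      -> copied  ("Project" passes !(*.Test))
Project.Test\bin\x86\Release\Tests.dll   -> skipped ("Project.Test" fails !(*.Test))
```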

Are cloud dataflow job outputs transactional?

Assuming I don't know the status of the job that was supposed to generate some output files (in Cloud Storage), can I assume that if some output files exist, they contain all of the job's output?
Or is it possible that partial output is visible?
Thanks,
G
It is possible that only a subset of the files is visible, but the visible files are complete (cannot grow or change).
The filenames contain the total number of files (output-XXXXX-of-NNNNN), so once you have one file, you know how many more to expect.
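If you need to check completeness programmatically, the NNNNN total can be parsed out of the shard names. A small Groovy sketch (the file list is illustrative; in practice you would list the objects in your bucket):

```groovy
// Infer completeness from the shard-naming convention output-XXXXX-of-NNNNN.
def files = ['output-00000-of-00003', 'output-00002-of-00003']

def m = (files[0] =~ /-of-(\d+)$/)
if (!m.find()) throw new IllegalArgumentException("not a sharded output name: ${files[0]}")
int expected = m.group(1) as int

// Which shard numbers are actually present?
def present = files.collect { name ->
    def shard = (name =~ /-(\d+)-of-/)
    shard.find() ? (shard.group(1) as int) : -1
} as Set

def missing = (0..<expected).findAll { !(it in present) }
println missing ? "incomplete, missing shards: $missing" : "all $expected shards present"
```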

How to read a list of files from a folder and export them into a database in Grails

I have lots of files of the same type, like .xml, in a folder. How can I select this folder from the interface, iterate over each file, and send its contents to the appropriate database tables?
Thanks
Sonu
Do you always put the files in the same directory? For example, if you generate these files in some other system and just want the data imported into your application, you could:
1. Create a job that runs every X minutes
2. Have it iterate over each file in the directory, parse the XML, and create and save the objects to the database
3. Move or delete the files once they have been processed
Jobs are a Grails concept/plugin: http://www.grails.org/Job+Scheduling+(Quartz)
Processing XML is easy in Groovy - you have many options; which one fits best depends on your specific scenario - http://groovy.codehaus.org/Processing+XML
Processing files is also trivial - http://groovy.codehaus.org/groovy-jdk/java/io/File.html#eachFile(groovy.lang.Closure)
This is a high-level overview. Hope it helps.
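A minimal Groovy sketch of that job, using the Quartz plugin's job convention; the Record domain class, the directories, and the XML element names are all hypothetical:

```groovy
class XmlImportJob {
    static triggers = {
        simple repeatInterval: 5 * 60 * 1000 // run every 5 minutes
    }

    def execute() {
        def inbox = new File('/data/xml-inbox')      // assumption: where files arrive
        def done  = new File('/data/xml-processed')  // assumption: where they go after import
        inbox.eachFileMatch(~/.*\.xml/) { file ->
            def xml = new XmlSlurper().parse(file)
            // Map the XML onto your domain class; element names are illustrative.
            new Record(name: xml.name.text(), value: xml.value.text()).save(flush: true)
            file.renameTo(new File(done, file.name)) // shift processed files aside
        }
    }
}
```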
