Robot Framework is a keyword-based testing framework. I have to test a remote server, so
I need to do some prerequisite steps like:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script.
I can run only the test scripts with Robot. But can we do all of these tasks using Robot scripting, and if yes, what is the advantage of this?
Yes, you could do this all with Robot. You can write a keyword in Python that does all of those steps. You could then call that keyword in the suite setup of a test suite.
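As a rough sketch only (the module name, host, credentials, paths, and start command below are all hypothetical), such a Python keyword could look like this and be imported as a library:

```python
# deploy_keywords.py - hypothetical Robot Framework keyword library.
# Assumes paramiko is available; host, credentials, and paths are placeholders.
import paramiko

def deploy_and_start_server(host, user, password, artifact, remote_path):
    """Copy the artifact to the remote machine and start the application server."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=password)

    # i) copy the artifact to the remote machine
    sftp = ssh.open_sftp()
    sftp.put(artifact, remote_path)
    sftp.close()

    # ii) start the application server (placeholder command)
    ssh.exec_command("nohup /opt/app/start_server.sh > server.log 2>&1 &")
    ssh.close()
```

In a Robot suite this function would show up as the keyword Deploy And Start Server, which you could then reference from Suite Setup.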
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check if the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script for yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code to test your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers on how to do something instead of reworking your own script.
Yes, you can do this with robot, decently easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done with configuration files to define what server to run the test case on.
Here are the keywords you can use from SSHLibrary in Robot Framework.
To copy the artifact to the remote machine:
- Open Connection
- Login or Login With Private Key
- Put Directory or Put File
To start the application server on the remote machine:
- Execute Command
To run the tests on the remote machine (assuming the setup is already there):
- Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.
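The same keywords are also exposed as methods on SSHLibrary's Python API, so roughly (assuming the usual keyword-to-method name mapping; host, credentials, and paths are placeholders) the whole flow in plain Python would look like:

```python
# Sketch using SSHLibrary's Python API; keyword names map to method names.
# Host, credentials, and paths are placeholders.
from SSHLibrary import SSHLibrary

ssh = SSHLibrary()
ssh.open_connection("remote.example.com")             # Open Connection
ssh.login("user", "password")                         # Login
ssh.put_file("build/app.tar.gz", "/opt/app/")         # Put File
ssh.execute_command("/opt/app/start_server.sh")       # Execute Command (start server)
output = ssh.execute_command("pybot /opt/app/tests")  # run the tests remotely
print(output)
ssh.close_connection()
```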
Related
We create iOS and Android apps that are white-labeled. They all use a single code base (one for iOS and one for Android). Whenever we need to make changes to all of our apps (> 100 live in App Store) we rely on Fastlane. We have a "bulk" command that submits each new build to Apple, changing out config variables first and a few files so each app is unique.
This has worked well for us... but... it's getting really slow. We'd love to be able to take advantage of some of the continuous integration services out there. It seems like they weren't necessarily made for this use case, but it might still work?
Ideally instead of running bulk on a local machine we could spin up 100 instances on something like CircleCI and they all run side by side, using our fastlane script to build, submit, etc.
We started by looking into CircleCI. The problem we are running into is they don't allow injection of variables into a job (https://ideas.circleci.com/ideas/CCI-I-690).
Is there a better service for this goal? Is there a tool that was built to achieve this? Struggling to find an alternative to hacking together a bunch of smaller tools.
I think you already identified your first step: You will have to split your fastlane (and other tooling) configuration, so it is possible to build each app in isolation.
Then you can trigger a job for each app on a CI service such as Travis CI or Azure Pipelines (both have a simple API you can use to start jobs and pass them parameters that will be available to your job) that builds and releases the app.
All the other things (e.g. one big build vs. many small build steps etc.) are just implementation details and will depend on the individual service or tools you choose.
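As a purely hypothetical sketch of that kind of fan-out (organization, project, pipeline id, parameter name, and token are placeholders, and the exact endpoint and api-version should be checked against the current Azure DevOps documentation), triggering one parameterized run per app could look like:

```python
# Hypothetical: queue one parameterized Azure Pipelines run per white-label app.
# Org, project, pipeline id, parameter name, and token are placeholders;
# verify the endpoint/api-version against the Azure DevOps REST docs.
import requests

ORG, PROJECT, PIPELINE_ID = "my-org", "white-label-apps", 42
PAT = "personal-access-token"
url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
       f"{PIPELINE_ID}/runs?api-version=7.0")

app_ids = ["app-001", "app-002", "app-003"]  # one entry per app
for app_id in app_ids:
    body = {"templateParameters": {"appId": app_id}}  # surfaced to the job
    resp = requests.post(url, json=body, auth=("", PAT))
    resp.raise_for_status()
    print(app_id, "queued as run", resp.json().get("id"))
```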
I have CI configured with TFS. What are the best ways to organize post-build (or, even better, post-test) deployment? My binaries are some libraries plus a single executable file.
Here is what I need:
Build on each commit. (This is configured and done)
When the build (or the tests) succeed, grab the binaries and drop them into a specific folder on the same build machine, with full replacement of the previous files and folders. (I'd like to be able to configure the folder location somehow.)
Launch the application with some parameters, with standard output redirection. For example: App.exe param=paramValue > log.txt
And before starting the application I need to kill the previous instance of it. (This is some kind of server instance that is alive all the time.)
The most obvious solution I tried was to do this with a post-build script, but this attempt failed. See here
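For concreteness, the post-build script being described (replace the binaries, kill the previous instance, relaunch with stdout redirected) amounts to something like the sketch below; the paths, executable name, and parameter are placeholders:

```python
# Minimal sketch of the post-build step described above.
# Paths, executable name, and parameter are placeholders.
import shutil
import subprocess
from pathlib import Path

DROP = Path(r"C:\deploy\MyApp")   # configurable target folder
BIN = Path(r"bin\Release")        # build output

# Full replacement of the previous files and folders.
if DROP.exists():
    shutil.rmtree(DROP)
shutil.copytree(BIN, DROP)

# Kill the previous instance (ignore failure if it isn't running).
subprocess.run(["taskkill", "/IM", "App.exe", "/F"], check=False)

# Relaunch with stdout redirected: App.exe param=paramValue > log.txt
with open(DROP / "log.txt", "w") as log:
    subprocess.Popen([str(DROP / "App.exe"), "param=paramValue"],
                     stdout=log, stderr=subprocess.STDOUT, cwd=DROP)
```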
Use Release Management in conjunction with PowerShell (or better still, Desired State Configuration) scripts. Depending on your MSDN licensing, it could be free for you, and it's specifically designed from the ground up to handle managing releases.
Overextending the build process to also do deployment is an awful idea. The build tools were designed to build, and they're good at it! They're not good at the types of considerations you have when you're trying to do deployments.
The problem is that most CI solutions (TFS included) would get you to the point where you had binaries, then say "Welp, you're on your own! Have fun figuring out how to deploy this stuff!" This never ends well -- you end up with something inflexible and very difficult to troubleshoot and maintain.
The modern "devops" approach here is to have your application's requirements in source control, treated as code (in this case, as a DSC script or scripts).
One other consideration: It sounds like you're trying to treat a console application as a service. This is going to be a big, big pain for you, since most software that handles releases will not run in an interactive session. Turn it into a true Windows service and your life will be easier.
I am setting up my local build using Ant and have decided to use RabbitMQ. I would like an Ant task that I can use to configure my local installation, setting things up (stop, start, create queues, etc.) and tearing them down as part of my test suite.
Has anyone come across anything like this?
I described a scenario in this question where the OP was looking for a way to declare queues and bindings without the overhead of doing it at runtime.
In my solution I use a console utility to perform the queue declarations and have this called from a build step in my build server when running builds and tests.
During the normal course of coding and integration testing from the IDE, I simply make sure that I have used the utility fairly recently to make sure the queues have been established as per the current XML definitions. My test setups ensure that the queues themselves are empty before running.
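As an illustration only (the queue names and connection details are made up, and the pika client is assumed here even though the original utility may be something else entirely), such a declaration utility can be as small as:

```python
# declare_queues.py - tiny console utility to (re)declare queues before a test run.
# Connection details and queue names are placeholders; assumes the pika client.
import pika

QUEUES = ["orders", "payments", "notifications"]

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
for name in QUEUES:
    channel.queue_declare(queue=name, durable=True)  # idempotent: safe to re-run
connection.close()
```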
Hope this helps.
Steve
Ant is a build tool. While running your automated tests is generally part of a build process, the setup of your queues is part of your specification's context and should be included in your tests. If you truly need to configure your exchanges and queues once before all test runs, many frameworks provide a facility to do this.
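For example, in a Python test suite (illustrative only; the pika client, host, and queue name are assumptions, not anything from the question's Ant setup), the queue setup and cleanup can live in the test fixture itself:

```python
# Illustrative only: declare and empty the queue as part of the test fixture.
# Assumes the pika client; host and queue name are placeholders.
import time
import unittest

import pika

class OrderQueueTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        cls.channel = cls.connection.channel()
        cls.channel.queue_declare(queue="orders")   # part of the spec's context

    def setUp(self):
        self.channel.queue_purge(queue="orders")    # each test starts with an empty queue

    @classmethod
    def tearDownClass(cls):
        cls.connection.close()

    def test_publish_lands_on_queue(self):
        self.channel.basic_publish(exchange="", routing_key="orders", body=b"hello")
        body = None
        for _ in range(10):                         # basic_get polls, so retry briefly
            _, _, body = self.channel.basic_get(queue="orders", auto_ack=True)
            if body is not None:
                break
            time.sleep(0.1)
        self.assertEqual(body, b"hello")
```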
Has anybody discovered any means to fire an ant build process automatically based on file system changes?
I basically want my ant build system to begin building similar to an IDE (compile java classes) but from some sort of command line service.
If not, there's always coding one up with Java and integrating the Ant API into it.
I am familiar with continuous integration systems like Jenkins and the like; however, I need the build to be fired on file changes, not on check-in. Also, I would like it to be independent of the IDE, even though an IDE could do this with a post-save hook.
I'm looking for an independent build service without source control requirements.
Since you are using Ant, I assume a Java-based directory monitoring program will help here. You can write a program using the I/O notification (Watch Service) API.
Notes from the page:
When to Use and Not Use This API

The Watch Service API is designed for applications that need to be notified about file change events. It is well suited for any application, like an editor or IDE, that potentially has many open files and needs to ensure that the files are synchronized with the file system. It is also well suited for an application server that watches a directory, perhaps waiting for .jsp or .jar files to drop, in order to deploy them.

This API is not designed for indexing a hard drive. Most file system implementations have native support for file change notification. The Watch Service API takes advantage of this support where available. However, when a file system does not support this mechanism, the Watch Service will poll the file system, waiting for events.
Edit
After I wrote that, this question and its answer seem to be more useful here: Is there a sophisticated file system monitor for Java which is freeware or open source?
The widely practiced approach is "continuous build" / "continuous integration". A sample workflow:
You check in your code into a source control repository
The continuous integration server picks up changes from the repository and starts a build process
The build process results in either success or failure giving you a fast feedback
Lots of continuous integration servers (Bamboo, Jenkins, Go) support Ant natively.
You can also set up post-save hooks in your IDE. Most modern ones support it: IntelliJ, NetBeans, Eclipse.
Look up "continuous integration" in google.
A friend pointed me to this:
https://serverfault.com/questions/179706/how-can-i-trigger-a-script-to-run-after-the-rsyncdaemon-received-file-changes-to
A little script to monitor for changes and execute an independent task.
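In the same spirit, a bare-bones polling watcher in Python (standard library only; the watched directory, file pattern, and Ant target are placeholders) that re-runs Ant whenever a source file changes could look like:

```python
# Bare-bones change watcher: polls a source tree and re-runs Ant on changes.
# Watched directory, file pattern, and Ant target are placeholders.
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("src")
INTERVAL = 2  # seconds between polls

def snapshot():
    """Map each .java file to its last-modified time."""
    return {p: p.stat().st_mtime for p in WATCH_DIR.rglob("*.java")}

previous = snapshot()
while True:
    time.sleep(INTERVAL)
    current = snapshot()
    if current != previous:
        previous = current
        subprocess.run(["ant", "compile"])  # fire the build on any change
```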
If anyone has a better method that works with ANT scripts directly, let me know.
What kind of agile tools are you using for Erlang development? What continuous integration (CI) server are you using to build Erlang code? The only reference I found was the Quora question "How do I integrate Erlang unit tests in Jenkins (Hudson)?".
I am also interested in the nitty-gritty details of setting them up and making them talk to each other.
As a company using Erlang actively, Klarna (www.klarna.com) uses Jenkins (formerly Hudson) for daily regression tests on nearly every dev commit. It's an org with about 80 people total in R&D, and we use the distributed mode of Jenkins, which allows us to have more than 10 build slaves mastered by a single Jenkins server. Basically, we have a code base of Erlang code which is version controlled with tools like SVN or Git. All the test cases are under the Common Test framework, and it all works well under Jenkins.
Previously, we tried CruiseControl and gave it up since Jenkins does a much better job.
As Lukas mentioned, you will probably need a tool to generate XML files since Common Test doesn't export them directly. I haven't really tried that module though; we had an implementation of a Common Test event handler to do the job, but it was abandoned due to performance, since we have a critical requirement on test time. Right now, we use our own script to export XML from the Common Test logs directly.
There is a lot more you can do with Erlang and Jenkins, like code coverage analysis (if you compile properly and export formatted XML to the Cobertura plugin), GUI testing with Selenium, etc.
For setting up Jenkins, I think the Jenkins home page has a good introduction.
Regarding agile tools, I guess it's really hard to define what an agile tool is. I also believe it depends very much on the size of your org. You will probably need a good process view tool (team level or department level), a good ticket tracking tool, a code review tool, and a communication tool. There are a bunch of them available as open source. In our experience, none of them seems to work seamlessly with Jenkins, which means you will need to select and tweak them according to your own requirements. But that's the beauty of open source, isn't it? :)
If you want to do it using Jenkins, I have written a Common Test hook which generates JUnit XML output for your tests, which Jenkins can use to produce test statistics.
https://github.com/garazdawi/cth_tools/blob/master/src/cth_junit.erl
We use Jenkins for our Python code, so I think you may use Jenkins with Erlang code.
We use buildbot with our own recipes to hook unit tests.
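For what it's worth, a minimal buildbot recipe of that shape (the repository URL and test commands are placeholders, and this assumes a reasonably recent buildbot exposing the buildbot.plugins API) boils down to a build factory in master.cfg:

```python
# Fragment of a buildbot master.cfg: check out the code and run the Erlang tests.
# Repository URL and test commands are placeholders; assumes the
# buildbot.plugins API of newer buildbot releases.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://example.com/my-erlang-app.git",
                          mode="incremental"))
factory.addStep(steps.ShellCommand(command=["rebar3", "eunit"], name="eunit"))
factory.addStep(steps.ShellCommand(command=["rebar3", "ct"], name="common_test"))
```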