Does Jenkins have anything like TeamCity's service messages?

TeamCity has a feature that (as near as I can figure) is called "service messages". You can see the documentation here. Essentially, it lets me write things like
##teamcity[publishArtifacts '<path>']
to tell the build server to do things. I like this feature. It lets me include the build server steps in my build scripts (and thus in source control) rather than as a configuration on the server. This makes migrating to a different server or recovering from disaster more reliable, "documents" this behavior, and allows multiple builds to leverage it without additional configuration. It's several fewer things people have to remember to set up when they make new build configurations, and it's much easier to write print '<message>' than it is to load the build server's web interface and drill through several pages looking for the right configuration page.
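For example, a build script might emit lines like these on stdout (an illustrative sketch; the artifact path is hypothetical, and progressMessage is another documented service message):

echo "##teamcity[publishArtifacts 'dist/output.zip']"
echo "##teamcity[progressMessage 'Packaging finished']"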
I've looked around, but I haven't been able to find anything that does this for Jenkins. Does Jenkins have anything similar?

Related

Fitnesse wiki file persistence options

What are the persistence options for fitnesse files? So far it seems like a file system is the only thing supported. There does appear to be an out of date database plugin. Is there anything else that is supported (S3, database, etc.)? Is there a way to control where files are persisted if using the filesystem?
I believe there is very little in that area. The location of the files can be controlled using a command line option. See http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
-d /path/to/fitnesse/root
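For instance, a full start command might look like this (a sketch; the jar name, port, and path are illustrative):

java -jar fitnesse-standalone.jar -p 8080 -d /path/to/fitnesse/root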
I've used the FitNesse wiki as a local development tool, with the pages on the file system. Once I'm satisfied with the tests, I commit them to version control (e.g. git) so that they become part of the (integration) test pipeline setup (e.g. they are run as part of the CI/CD pipeline of the project).
I believe there is a plugin that will automatically commit any save action to Git, but I've never used it. Saving each edit action just pollutes version control, in my opinion. I only want to see tests after they have been checked/completed, and that tends not to be each save.
Working in a shared wiki environment (where I would expect a non-file-system approach to fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates 'noise' in the test set: it becomes unclear which tests form the current reference set that should pass and which are work in progress.
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.

versioning and deployment of application configuration files to server

We use VisualSVN for version control. I have a few cloud web servers where my websites are running.
I would like to create some repositories for the websites' content. I check the files out, edit them in a local editor (Notepad++), and check them in to SVN. On check-in to VisualSVN, I would like them to be deployed to the web servers' docroot. In some cases I would like to restart the web server too.
Is this possible using Jenkins plus deployment plugins? I am very new to Jenkins; can somebody help me with some information on how we can achieve this?
This is one of the scenarios Jenkins is designed for (Continuous Delivery, aka CD). Your perfect plan might look like this:
Get a new instance of Jenkins up and running for experiments (if you're familiar with Docker, it is one of the best ways to experiment with Jenkins);
Configure Subversion Plugin in Jenkins (integration with SVN);
Set up your first FreeStyle job in Jenkins that polls your VisualSVN server for changes (things you check in to SVN) and learn how that works (a schedule of * * * * * polls your source control every minute, which is great for experiments);
Set up your second FreeStyle job that connects to one of your webservers (probably via SSH) and creates a file (a simple "touch hello_world.log" is great to start with) in a special folder dedicated to that kind of test (DO NOT MESS WITH YOUR PRODUCTION CONTENT FOLDER(s));
Set up your third FreeStyle job that combines the experience acquired in #1 and #2, and still writes to a test folder;
Compare the results of the job output with your production deployment expectations (e.g. files are in place, content is processed the right way, configuration files look good, etc.);
Try it out on one of the production web servers, one folder/site at a time;
Apply your newly crafted delivery pipeline to the rest of servers/sites;
Learn how to back up your Jenkins instance and actually make your first backup;
Try to restore your Jenkins instance from the backup made in the previous step;
Decide whether it is okay for you to maintain your own Jenkins instance or you will be better off with a hosted version of it (CloudBees Inc.);
Learn more about Pipeline in Jenkins and possibly (because it is not immediately obvious) migrate your FreeStyle job(s) to Pipeline DSL and/or a Jenkinsfile (a minimal sketch follows this list);
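For that last step, a minimal declarative Jenkinsfile for this scenario might look roughly like the following. This is a sketch, not a drop-in solution: the repository URL, credentials ID, host, and paths are placeholders, and the sshagent step requires the SSH Agent plugin.

    pipeline {
        agent any
        // poll VisualSVN every minute, as in the FreeStyle experiment above
        triggers { pollSCM('* * * * *') }
        stages {
            stage('Checkout') {
                steps {
                    checkout([$class: 'SubversionSCM',
                              locations: [[remote: 'https://svn.example.com/websites/trunk']]])
                }
            }
            stage('Deploy to test folder') {
                steps {
                    // 'web01-ssh' is a placeholder credentials ID
                    sshagent(['web01-ssh']) {
                        sh 'rsync -av --exclude=".svn" ./ deploy@web01.example.com:/var/www/deploy-test/'
                    }
                }
            }
        }
    }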
At times you might need to get back to the "Get Started with Jenkins" manual and look up ideas or answers; that is okay - do not give up, and feel free to post your questions here at SO.
Hope these ideas help you get started.

Create a Glassfish 3 domain as part of ant build?

I have a JEE6 project based on Glassfish 3.1.1 that is moving beyond the "one developer prototype" stage to being developed by a team.
Each member of the team will have their own local Glassfish server. I don't want each of them to have to go through all the manual steps of setting up the JDBC connection pool, JMS services, JDBC security realm, etc. via the admin console, as I did when first developing the prototype. It is error-prone, and if I want to change something I have to tell everyone what to do. I want it to be done as part of the ant build, so that it is a one-clicker, and then if I have to change something I can just tell them to do a clean to blow away the domain and then run it again. So there would be an ant task to "config-glassfish" that would somehow configure the domain for them.
Despite extensive searching, I can't seem to find any step-by-step guide of how best to accomplish this. Anyone have a link?
Would it be best to attempt to capture the fully configured domain and store that in our src repository?
Or should I instead have ant issue "asadmin" commands to create and configure the domain?
You can do all of this with the sun-appserv-admin ant task. You can find more information here: http://docs.oracle.com/cd/E19316-01/820-4336/beaev/index.html
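If the sun-appserv-admin task definitions are not available in your setup, plain exec calls to asadmin from ant can achieve the same thing; a sketch (the install path, domain, pool, and resource names are illustrative, as is the Postgres datasource class):

    <target name="config-glassfish">
        <exec executable="${glassfish.home}/bin/asadmin" failonerror="true">
            <arg line="create-domain --adminport 4848 --nopassword=true devdomain"/>
        </exec>
        <exec executable="${glassfish.home}/bin/asadmin" failonerror="true">
            <arg line="create-jdbc-connection-pool --datasourceclassname org.postgresql.ds.PGSimpleDataSource --restype javax.sql.DataSource devpool"/>
        </exec>
        <exec executable="${glassfish.home}/bin/asadmin" failonerror="true">
            <arg line="create-jdbc-resource --connectionpoolid devpool jdbc/devDS"/>
        </exec>
    </target>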
We struggle with this kind of thing at my work too, but only with a few developers. One thing I really like is that Glassfish has the concept of a resources.xml which will cover a lot of the config. I use this to pass around connection pool configs and JMS queues and it works really well, but it might not cover all your config needs. The contents of the file are pretty much snippets from the domain.xml, and I haven't figured out everything it can do yet. http://docs.oracle.com/cd/E19798-01/821-1751/ggoeh/index.html http://javahowto.blogspot.com/2011/02/sample-glassfish-resourcesxml.html
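As a sketch of what such a file can contain (the pool name, JNDI name, and Postgres datasource class are illustrative), loaded into a domain with "asadmin add-resources resources.xml":

    <resources>
        <jdbc-connection-pool name="devpool"
                              res-type="javax.sql.DataSource"
                              datasource-classname="org.postgresql.ds.PGSimpleDataSource">
            <property name="serverName" value="localhost"/>
            <property name="databaseName" value="appdb"/>
            <property name="user" value="app"/>
            <property name="password" value="secret"/>
        </jdbc-connection-pool>
        <jdbc-resource jndi-name="jdbc/devDS" pool-name="devpool"/>
    </resources>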
I haven't tried other ideas since the resources.xml solves my major pain points, but you could take your domain.xml and work through any issues brought up by copying it to another developer's domain, then do variable replacement on the parts of the file that need it. That way you could have ant create the domain, then overwrite the domain.xml with the newly filled-out one.
Maybe there is a way you could use asadmin backup-domain.
One other idea would be Chef. http://wiki.opscode.com/display/chef/Home
I ended up just putting the domain.xml into the src repository, making an ant task to copy it over to the glassfish directory, and instructing other developers that when running that ant task, they should make sure glassfish is not running.
This worked for my case...

ASP.NET MVC and multiple environments

How does ASP.NET MVC, if at all, deal with or provide ways to create your application using multiple environments? For example:
Development environment (local machine, probably run via the built-in web server and talking to a local database)
Testing (runs against a preloaded database with example data, although this part could be skipped and mocks could be used)
Production database on a real server with real data
Ruby on Rails has the concept of environments and "automagically" can deduce if you're in development or production, so you can specify your connection information (connection string) in a config file and the framework dynamically pulls the appropriate one. Is there a similar way of doing things with .NET MVC? If not then how are professional developers using .NET MVC handling different environments?
The only way I can think of is to manually add an "environment" global method (or use an enum, or something like that; maybe this is a use for something like the State pattern?), store the different connection strings in the web.config file, and then create a base class, from which all data-access classes derive, that provides a way to obtain the connection string for the current environment; this would then have to be set to production when the time comes to put the application live.
Is there another way? Most of the .NET MVC videos and articles I've seen don't even bother with separate environments but only use a development database and don't indicate how you do it in production.
I'd say this is really a question of your company's internal processes. Since every company is a little bit different it's hard to have a "right" generic way to support dev/test/alpha/production and/or other environments.
One way: Create a setup program that supplies the correct connection string based on the environment chosen during the setup process.
Another way: System Admin edits web.config file to supply correct connection string during install.
Yet Another Way: Connection strings are stored in the system registry.
Even Another Odd Way: You have all your connection strings for all environments in web.config, then a setting in appSettings that tells you which one to use (a web.config sketch follows below).
Depending on the client, I've done all of these. There are more, but these are the most popular.
(One client wanted to store the connection string in the database itself. Really.)
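That last appSettings approach might look like this in web.config (the key and connection names are illustrative); at startup the application reads ActiveEnvironment and picks the connection string with the matching name:

    <appSettings>
        <add key="ActiveEnvironment" value="Development"/>
    </appSettings>
    <connectionStrings>
        <add name="Development" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=App;Integrated Security=True"/>
        <add name="Production" connectionString="Data Source=prod-sql;Initial Catalog=App;Integrated Security=True"/>
    </connectionStrings>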
You can use an alias for your database. You just point these aliases to different servers in the different environments. They are stored in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\Connect, if I remember right. Then you use the alias in the connection string.
In response to Jason's answer:
We use Enterprise Library Environments to configure the different environment parameters and, via msbuild, invoke the Merge Configuration Tool that generates the different configs for each environment. The deploy process picks the right config file depending on which environment to install.
I was able to solve a similar situation following these steps:
In your Visual Studio, access Build > Configuration Manager
Click in "new"
Choose a name for your configuration, and then copy settings from an existing config. After the configuration creation, it will be available for you to target as build configuration
Create a Web.{env-name-you-chose}.config in your application folder, along with the original Web.config file.
Open your .csproj file with Visual Studio or any text editor
Search for the section that lists the existing Web.*.config transform files and add a matching entry for the config file name you gave previously (a sketch of this section follows these steps):
Open your Visual Studio, reload projects if required, and now you can choose your configuration via the CLI or a manual publish using Visual Studio.
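The .csproj section referenced above typically looks roughly like this, with the last entry being the one you add, assuming the new configuration was named Staging (a hypothetical name):

    <ItemGroup>
        <Content Include="Web.config" />
        <Content Include="Web.Debug.config">
            <DependentUpon>Web.config</DependentUpon>
        </Content>
        <Content Include="Web.Release.config">
            <DependentUpon>Web.config</DependentUpon>
        </Content>
        <Content Include="Web.Staging.config">
            <DependentUpon>Web.config</DependentUpon>
        </Content>
    </ItemGroup>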
There is a Publishing Wizard (in Visual Studio) which lets you change parts of web.config for a release build automatically. Which happens to be the feature you are asking about. No magic though.
What we have done is alter values in web.config during our automated build process (Hudson), depending on which environment the build is for. Unfortunately there isn't a magical way to do this.
For deployment, which I assume is what the OP was asking about, one creates multiple configurations and, in the publish profile, picks a different configuration. These are called transforms, and they operate on the web.config. One would have at least three publish profiles: one each for dev, test, and prod. One can change more than just the connection string this way: one can turn on custom errors, turn off debugging, and change values of configuration variables. I highly recommend it.
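For instance, a Web.Release.config transform might look like this (the connection string and names are illustrative):

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
        <connectionStrings>
            <add name="AppDb"
                 connectionString="Data Source=prod-sql;Initial Catalog=App;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
        </connectionStrings>
        <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)"/>
            <customErrors mode="On" xdt:Transform="Replace"/>
        </system.web>
    </configuration>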
I have a similar question. I have a log-table reader, and I want it to read log tables in the development, test, and production databases. The major difficulty is that my user account doesn't have permission to look at test and production. It's some silly security thing. The user that I'm impersonating in the application does have permission. I'm struggling to tell MVC to build the test and production models using the impersonated user.

Is there a better way of viewing team build logs?

Currently the msbuild logs for Team Build are appalling, as they are just plain text and very difficult to read. Also, the ones created by my build are approx. 30 MB and take quite a while to download (our TFS server is in our datacentre).
Does anyone know a way to view these logs more easily, preferably integrated with either TFS itself or TFS Web Access?
Take a look at the following blog post I did a while ago:
http://www.woodwardweb.com/teamprise/000415.html
This describes how to create a simple ASP.NET page that will stream the contents of your log file to you over HTTP. The advantage of doing it this way is that you don't have to wait for the entire page to load before the log starts to render for you in Visual Studio.
Also - you can add some simple formatting to the file while streaming. In the example on my blog I simply make the start of each target appear in bold to make them stand out a bit more, but you can see how you could go crazy with this approach if you wanted.
If increasing bandwidth isn't an option, then I would suggest writing your own HTML logger and attaching it to the build process: split the HTML build log into minor parts (defined by targets and/or projects) and have one index file pointing to all the minor parts, with appropriate information on whether a given part failed or succeeded. Then you only need to parse the index file and fetch any requested part over the link.
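Attaching such a custom logger to msbuild would look something like this (the logger class and assembly names are hypothetical; the /logger switch itself is standard msbuild):

msbuild MySolution.sln /logger:HtmlLogger,CustomLoggers.dll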
A third possibility is to compress the log-file after the build completes.
