I have a project where a number of 'environments' run simultaneously: a local development environment (Visual Studio), Dev, Test, and Prod.
We now wish to expand the program suite with a 'server application' that processes background jobs such as calculations and mail sending.
I'm trying to find best practice for this situation.
I'm thinking it should be a Windows service.
As a result, I need three copies of the service running (Dev, Test, and Prod), preferably on the single server assigned as our application server. I'm thinking I can copy the relevant exe to separate directories and 'somehow' instruct each service which environment it is supposed to connect to.
It's important to note that the three services would not necessarily be running the same release of the code.
What is the best practice for doing this?
Any input appreciated!
Anders, Denmark
Definitely sounds like Windows Services would be the right call. These services would be daemons, running independently from each other.
I recommend against creating three executables. Stick with just one, as it is easier to deploy.
Have your exe take a command-line parameter telling it which environment it should run against, and fire off the appropriate part of the code.
It's then pretty easy to start, stop, and query your services.
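A sketch of what I mean, with every name (service, directories, argument) invented for illustration: install the same exe three times, e.g. sc create SuiteWorkerTest binPath= "D:\Services\Test\Worker.exe Test" (note that sc requires the space after binPath=), and let Main pick up the argument:

    using System.Configuration;
    using System.ServiceProcess;

    static class Program
    {
        static void Main(string[] args)
        {
            // The environment name baked into the service's binPath
            // arrives here when the service starts; default to Dev.
            string env = args.Length > 0 ? args[0] : "Dev";

            // Pick the matching connection string from the config file.
            string conn = ConfigurationManager.ConnectionStrings[env].ConnectionString;
            ServiceBase.Run(new SuiteService(env, conn));
        }
    }

    // Hypothetical worker service; the real one would kick off the
    // calculation/mail jobs against the given connection string.
    class SuiteService : ServiceBase
    {
        private readonly string _env;
        private readonly string _conn;

        public SuiteService(string env, string conn)
        {
            _env = env;
            _conn = conn;
            ServiceName = "SuiteWorker" + _env;  // e.g. SuiteWorkerTest
        }

        protected override void OnStart(string[] args)
        {
            // start background processing against _conn here
        }
    }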
Let me know your thoughts!
Use an application config file for each installed instance of the service executable so you can set the environment to run against.
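For illustration (the key and connection names here are made up), each installed copy's .exe.config would then differ only in its values:

    <!-- Deployed next to the exe in each environment's directory; only
         the values differ between the Dev, Test, and Prod copies. -->
    <configuration>
      <appSettings>
        <add key="Environment" value="Test" />
      </appSettings>
      <connectionStrings>
        <add name="Main"
             connectionString="Server=TESTSQL;Database=Suite;Integrated Security=SSPI" />
      </connectionStrings>
    </configuration>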
I'm using SCDF and I was wondering whether there is any way to configure default properties for one application.
I have a task application registered in SCDF, and it receives some JDBC properties to access the business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these properties in a properties file like this? (It feels a bit odd to define them at launch time.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, as the credentials would appear in the URL.
Is there another way or place to define default business properties for an application? These properties will only be used by this task.
The goal is to have one place where the OPS team can configure the URL and credentials without touching the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put the other way round: you can't fully install and configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
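In SCDF shell terms (using the fooTask and aaa.properties names from the question), that behaviour looks like this:

    # first launch: the properties get stored in the SCDF database
    task launch fooTask --propertiesFile aaa.properties
    # later launches reuse the stored configuration
    task launch fooTask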
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and keeps using the old one. No idea whether this is by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the configuration quirks mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and then recreating it with the new values.
With this operator we have achieved what you are looking for: all our SCDF configuration lives in a Git repository and all changes go through merge requests. Thanks to CI/CD, the new configuration is used on the next launch.
However, a Kubernetes operator should be part of the product. Without it, SCDF on Kubernetes feels quite "alien".
I am looking into how to set up a Liferay project with version control and automated deployment. I have a working local development environment in Eclipse, but as far as I understand it, a Liferay portal setup consists partly of the portal instance running on Tomcat and partly of my custom module projects for customization. I basically want all of that in one Git repository which can then be:
1: cloned by any developer to set up their local dev environment
2: built and deployed by e.g. Jenkins into e.g. AWS
I have looked at the Liferay documentation on creating a Docker container for the portal, but I don't fully understand how things like portal content would be handled.
I would be very grateful if someone could point me in the right direction on how an environment like this would be set up.
Code and content are different beasts. Set up a local Liferay instance for every single developer. Share and version the code through whatever version control system you use (you mention Git).
This way, every developer can work on their own project, set breakpoints, and create content that doesn't interfere with other developers.
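If you go the Docker route mentioned in the question, a per-developer instance can be a small compose file checked into the repository. This is a sketch only; the image tag and the mount path convention should be checked against the current Liferay Docker documentation:

    # docker-compose.yml - one throwaway local portal per developer
    version: "3"
    services:
      liferay:
        image: liferay/portal:7.3.7-ga8   # pick a current tag from Docker Hub
        ports:
          - "8080:8080"
        volumes:
          # modules built from this repository get hot-deployed from here
          - ./deploy:/mnt/liferay/deploy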
Set up a separate integration test environment that gets its code exclusively through your CI server and is never touched manually.
Your production (or preproduction) database will likely have completely different content: Where a developer is quick to create a few "Lorem Ipsum" posts and pages, you don't want them to escape into production. Thus there's no movement of content from development to production. Only code moves that way.
In case you want your developers to work on a production-like environment, you can restore the production content (database) to development machines. Note that this is risky though: The database also contains user accounts, and you might trigger update notification mails from your development machines - something that you want to avoid at all costs. Plus, this way you give developers access to login data (even though it's hashed) which can be abused. And it might even be explicitly forbidden by industry regulations to use production data in development environments.
In general: every system has its own database (or at least its own schema), document store, and indexing server. Every developer has their own portal JVM running. The other environments (integration test, load test, authoring, production) are also separate environments. And no, you don't need all of them all the time.
I can't attribute this quote (Milen can - see his comment), but it holds here:
Everybody has a testing environment. Some are lucky to run a completely different production environment.
Be the lucky one. If everyone has their own fully separated environment, nobody steps on anyone else's toes. And you'll need the integration tests (with the CI output) anyway.
I am using Spinnaker to deploy a three-tier system to QA and then to Production. The configuration files in each of these systems point to the others. If I bake the QA configuration into the AMI, how do I change it when promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by having AMIs with no configuration and then configuring them (somehow) after deployment?
What is recommended?
You can define custom AWS user data for a cluster at deploy time (under the advanced settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these kinds of configuration.
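Retrieving it on the instance is plain EC2, not Spinnaker-specific; for example:

    # EC2 instance metadata service; works from inside the instance
    curl http://169.254.169.254/latest/user-data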
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via Nebula/Gradle. These usually set values like NETFLIX_ENVIRONMENT that are well known and programmed against.
We also use a feature flipping mechanism via https://github.com/Netflix/archaius . This allows us to add properties that are external to the clusters but can be targeted towards them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues this type of credential: https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script (a sketch follows below the list). This allows me to:
1. Configure the server as much as I can and then store those configurations in an AMI.
2. Easily change these configurations if the need arises.
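For illustration, a minimal Packer template along those lines (the region, AMI ID, instance type, and script name are all placeholders):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "app-base-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "configure-server.sh"
      }]
    }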
Then I launch the AMI using an Ansible script and make all the remaining configuration changes on the specific instance.
In my case I chose to create different images for staging and production, but mostly because they differ greatly. If they were more alike, I might have used a single AMI for both.
The advantage Ansible gives you here is in factoring your configuration: you write it once and apply it to both production and staging servers.
I have a JEE6 project based on Glassfish 3.1.1 that is moving beyond the "one developer prototype" stage to being developed by a team.
Each member of the team will have their own local Glassfish server. I don't want each of them to have to go through all the manual steps of setting up the JDBC connection pool, JMS services, JDBC security realm, etc. via the admin console, as I did when first developing the prototype. That is error-prone, and if I want to change something I have to tell everyone what to do. I want it done as part of the ant build, so that it is a one-clicker; then if I have to change something I can just tell them to do a clean to blow away the domain and run it again. So there would be an ant task, 'config-glassfish', that would somehow configure the domain for them.
Despite extensive searching, I can't seem to find any step-by-step guide of how best to accomplish this. Anyone have a link?
Would it be best to attempt to capture the fully configured domain and store that in our src repository?
Or should I instead have ant issue "asadmin" commands to create and configure the domain?
You can do all of this with the sun-appserv-admin ant task. You can find more information here: http://docs.oracle.com/cd/E19316-01/820-4336/beaev/index.html
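If the taskdef route gives you trouble, the same idea also works with a plain exec around asadmin. A sketch, with the pool and JNDI names made up:

    <!-- build.xml fragment: drive asadmin from ant; names are examples only -->
    <target name="config-glassfish">
      <exec executable="${glassfish.home}/bin/asadmin" failonerror="true">
        <arg line="create-jdbc-connection-pool --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.DataSource MyPool"/>
      </exec>
      <exec executable="${glassfish.home}/bin/asadmin" failonerror="true">
        <arg line="create-jdbc-resource --connectionpoolid MyPool jdbc/MyDS"/>
      </exec>
    </target>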
We struggle with this kind of thing at my work too, but only with a few developers. One thing I really like is that Glassfish has the concept of a resources.xml file, which covers a lot of the config. I use it to pass around connection pool configs and JMS queues, and it works really well, though it might not cover all your config needs. The contents of the file are pretty much snippets from the domain.xml, and I haven't figured out everything it can do yet. http://docs.oracle.com/cd/E19798-01/821-1751/ggoeh/index.html http://javahowto.blogspot.com/2011/02/sample-glassfish-resourcesxml.html
I haven't tried other ideas, since resources.xml solves my major pain points, but you could take your domain.xml and work through any issues that come up when copying it to another developer's domain, then do variable replacement on the parts of the file that need it. That way you could have ant create the domain and then overwrite the domain.xml with the newly filled-out one.
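To give a flavour of what such a file can hold (the pool, names, and properties here are made up; see the links for real samples):

    <resources>
      <!-- connection pool plus the JDBC resource that points at it -->
      <jdbc-connection-pool name="MyPool"
          res-type="javax.sql.DataSource"
          datasource-classname="org.apache.derby.jdbc.ClientDataSource">
        <property name="serverName" value="localhost"/>
        <property name="portNumber" value="1527"/>
        <property name="databaseName" value="mydb"/>
      </jdbc-connection-pool>
      <jdbc-resource jndi-name="jdbc/MyDS" pool-name="MyPool"/>
    </resources>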
Maybe there is a way you could use asadmin backup-domain.
One other idea would be Chef. http://wiki.opscode.com/display/chef/Home
I ended up just putting the domain.xml into the src repository, making an ant task to copy it over to the glassfish directory, and instructing the other developers that glassfish must not be running when they run that ant task.
This worked for my case...
How does ASP.NET MVC, if at all, deal with or provide ways to create your application using multiple environments? For example:
Development environment (local machine, probably run via the built-in web server and talking to a local database)
Testing (runs against a preloaded database with example data, although this part could be skipped and mocks could be used)
Production database on a real server with real data
Ruby on Rails has the concept of environments and "automagically" can deduce if you're in development or production, so you can specify your connection information (connection string) in a config file and the framework dynamically pulls the appropriate one. Is there a similar way of doing things with .NET MVC? If not then how are professional developers using .NET MVC handling different environments?
The only way I can think of is to manually add an "environment" global method (or use an enum, or something like that; maybe this is a use for something like the State pattern?), store the different connection strings in the web.config file, and then create a base class that all data access classes derive from which provides a way to obtain the connection string for the current environment. This would then have to be set to production when the time comes to put the application live.
Is there another way? Most of the .NET MVC videos and articles I've seen don't even bother with separate environments but only use a development database and don't indicate how you do it in production.
I'd say this is really a question of your company's internal processes. Since every company is a little bit different it's hard to have a "right" generic way to support dev/test/alpha/production and/or other environments.
One way: Create a setup program that supplies the correct connection string based on the environment chosen during the setup process.
Another way: System Admin edits web.config file to supply correct connection string during install.
Yet Another Way: Connection strings are stored in the system registry.
Even Another Odd Way: You have all your connection strings for all environments in web.config, plus a setting in appSettings that tells you which one to use.
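For that last approach, web.config ends up looking something like this (the environment and key names are invented for the sketch):

    <appSettings>
      <!-- flipped per environment at install time -->
      <add key="ActiveEnvironment" value="Dev" />
    </appSettings>
    <connectionStrings>
      <add name="Dev"  connectionString="Server=DEVSQL;Database=App;Integrated Security=SSPI" />
      <add name="Test" connectionString="Server=TESTSQL;Database=App;Integrated Security=SSPI" />
      <add name="Prod" connectionString="Server=PRODSQL;Database=App;Integrated Security=SSPI" />
    </connectionStrings>

and the data access code looks up ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings["ActiveEnvironment"]].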
Depending on the client, I've done all of these. There are more but these are the more popular.
(One client wanted to store the connection string in the database itself. Really.)
You can use an alias for your database. You just point the alias at a different server in each environment. Aliases are stored in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo, if I remember right. Then you use the alias in the connection string.
In response to Jason's answer:
We use Enterprise Library environments to configure the different environment parameters, and via MSBuild we invoke the Merge Configuration Tool, which generates the different configs for each environment. The deploy process picks the right config file depending on which environment it installs to.
I was able to solve a similar situation by following these steps:
In your Visual Studio, access Build > Configuration Manager
Click "New".
Choose a name for your configuration, and copy the settings from an existing configuration. Once created, the new configuration will be available to target as a build configuration.
Create a Web.{env-name-you-chose}.config in your application folder, along with the original Web.config file.
Open your .csproj file with Visual Studio or any text editor
Search for a section that looks like the following and add the corresponding lines for the config file name you chose previously:
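In a standard project file the section in question is the Content item group, with one DependentUpon entry per transform file; here "Staging" stands in for whatever name you chose in step 3:

    <Content Include="Web.config" />
    <Content Include="Web.Debug.config">
      <DependentUpon>Web.config</DependentUpon>
    </Content>
    <Content Include="Web.Release.config">
      <DependentUpon>Web.config</DependentUpon>
    </Content>
    <Content Include="Web.Staging.config">
      <DependentUpon>Web.config</DependentUpon>
    </Content>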
Open Visual Studio again, reload the projects if required, and you can now choose your configuration via the CLI or a manual publish from Visual Studio.
There is a Publishing Wizard (in Visual Studio) which lets you change parts of web.config for the release build automatically. Which happens to be the feature you are asking about. No magic though.
What we have done is alter values in web.config during our automated build process (Hudson), depending on which environment the build is for. Unfortunately there isn't a magical way to do this.
For deployment, which I assume is what the OP was asking about, you create multiple configurations and pick a different one when publishing. These are called transforms, and they operate on web.config. You would have at least three publish profiles, one each for dev, test, and prod. You can change more than just the connection string this way: turn on custom errors, turn off debugging, and change the values of configuration variables. I highly recommend it.
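A minimal transform file, e.g. Web.Release.config, might look like this (the connection name and server are invented):

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <!-- swap the connection string whose name matches "Main" -->
        <add name="Main"
             connectionString="Server=PRODSQL;Database=App;Integrated Security=SSPI"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
      <system.web>
        <!-- strip the debug flag for production -->
        <compilation xdt:Transform="RemoveAttributes(debug)" />
      </system.web>
    </configuration>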
I have a similar question. I have a log table reader, and I want it to read log tables in the development, test, and production databases. The major difficulty is that my user account doesn't have permission to look at test and production; it's some silly security thing. The user that I'm impersonating in the application does have permission. I'm struggling to tell MVC to build the test and production models using the impersonated user.