OctopusDeploy: Is it possible to specify a folder where a Windows service should be deployed?

When you deploy a Windows service with Octopus Deploy, you can specify the package identifier to be used. As far as I understand, the package identifier is used as the folder the Windows service runs in.
E.g. a service called MyService with version number 1.8.7 and environment Production would be stored under C:\Octopus\Applications\Production\MyService\1.8.7.
My question now is whether one can specify a folder to deploy to. The reason behind this is that there are different instances of MyService for different customers, and they should all run on the same machine in our setup.
If the same package (and package identifier) is used for multiple services, can we somehow specify a folder to deploy to?
EDIT:
I found out that a second service with the same specification (but a different configuration) will generate a folder called C:\Octopus\Applications\Production\MyService\1.8.7_1. In my opinion, it would still be better if the user could influence this behaviour and specify a folder name.

You can set a custom installation directory by clicking on "Configure Features" on your deployment step and checking "Custom Installation Directory".
You can also use variables in the Custom Installation Directory.
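For example, a minimal sketch of how this could look for the per-customer scenario (CustomerName is a hypothetical Octopus variable, scoped per tenant or deployment target; #{...} is Octopus's variable substitution syntax):

Custom Installation Directory:
C:\Services\#{CustomerName}\MyService

With CustomerName scoped per customer, two services installed from the same package land in separate folders, e.g. C:\Services\CustomerA\MyService and C:\Services\CustomerB\MyService.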

Related

Is there a way to automate the creation of a file needed by a container in VS Code Remote-Containers before it runs?

This relates specifically to Remote-Containers with VS Code using a docker-compose file to define the services. One of those services in this case is MariaDB, which takes an environment variable specifying the location of a file that contains the password for the DB user. The file must exist when the container starts or it will throw an error that could be difficult for a novice to diagnose. There is a default value the file can contain, but if I commit it to the public repository, developers can no longer change it locally without the risk of the change being committed. When a developer builds a local environment from the repository, it's okay to use the default value, and they shouldn't be required to create files in special locations before opening the container.
According to the documentation, inside devcontainer.json there is a property called initializeCommand which seems like it would be the place to run some code to copy a default file to the right location, but the developer could be running VS Code on a Mac or on Windows. Is there a cross-platform way of making sure the environment is prepared before it gets launched? I want this automated so developers don't have to remember or know to do this before getting started.
Basically, it should automatically test if secrets/sql-password.txt exists and copy secrets/sql-default-password.txt there only if the file is missing.
Any suggestions?
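For illustration, a minimal sketch using initializeCommand to run a small Node.js script on the host before the container is created. This assumes Node.js is installed on each developer's machine; prepare.js is a hypothetical helper, not part of any VS Code API:

// .devcontainer/devcontainer.json (relevant part; devcontainer.json allows comments)
{
  "initializeCommand": ["node", ".devcontainer/prepare.js"]
}

// .devcontainer/prepare.js -- runs on the host (Mac, Windows, or Linux)
const fs = require("fs");
const path = require("path");

const secrets = path.join(__dirname, "..", "secrets");
const target = path.join(secrets, "sql-password.txt");
const fallback = path.join(secrets, "sql-default-password.txt");

// Copy the default password file only if no local override exists yet.
if (!fs.existsSync(target)) {
  fs.copyFileSync(fallback, target);
}

Adding secrets/sql-password.txt to .gitignore would then keep local overrides out of the public repository.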

Store a built/CMake result of libraries and dependencies in a Docker package and then add it to an existing Heroku app?

Our application is running on Heroku. It deals with a lot of assets stored on S3, and I'm trying to add the ability to transform GLTFs into USDZs.
Long story short, to generate a USDZ you need to build some binaries from PixarAnimationStudios/USD.
As it takes 1h+ to build and you need a bunch of other libraries, we created an EC2 instance on AWS and installed everything there. That works, but we would like to have it on Heroku.
We read about Docker Builds with heroku.yml, but I'm not sure if this is what we actually need. We would need to "build and install" all the dependencies in a "package" and then have it deployed in our main API. The goal would be to "add this command" to our main API and be able to call it from bash.
My question is: can we store a built/CMake result of libraries and dependencies in a Docker package and then add it to an existing Heroku app?
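One possible direction is a multi-stage Docker build deployed via heroku.yml, so the expensive USD compile happens once per image build and only the resulting binaries land in the final app image. This is a sketch under assumptions, not a verified build: the base images, install prefix, and start command are placeholders; build_usd.py is USD's bundled build helper:

# heroku.yml
build:
  docker:
    web: Dockerfile

# Dockerfile -- stage 1 compiles USD, stage 2 is the existing API image
FROM ubuntu:22.04 AS usd-build
RUN apt-get update && apt-get install -y git cmake g++ python3 python3-pip
RUN git clone https://github.com/PixarAnimationStudios/USD /src/USD
RUN python3 /src/USD/build_scripts/build_usd.py /opt/usd

FROM node:18 AS app              # replace with your API's actual base image
COPY --from=usd-build /opt/usd /opt/usd
ENV PATH="/opt/usd/bin:${PATH}" PYTHONPATH="/opt/usd/lib/python"
COPY . /app
WORKDIR /app
CMD ["npm", "start"]             # the API can now shell out to the USD tools

As long as the USD stage's layers don't change, Docker's layer cache means you don't pay the 1h+ build cost on every deploy.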

Where are you supposed to store your docker config files?

I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think such files fit on GitHub since they're used for deployment, though it's not a bad place to store them.
I was just wondering if docker had any support for storing such config files so you can add them as part of running an image.
Do you have to use Swarm?
Typically you'll store the configuration files on the Docker host and then bind mount them into the container. This lets you manage the configuration files separately from the running containers. When you make a change to the configuration, you can just restart the container.
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
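For example, a sketch following the official mongo image's documented pattern (the host path /srv/mongo is an assumption):

# Keep the config on the host; bind mount it read-only into the container
docker run -d --name mongodb \
  -v /srv/mongo:/etc/mongo:ro \
  mongo --config /etc/mongo/mongod.conf

# After editing /srv/mongo/mongod.conf on the host, just restart:
docker restart mongodb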
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) uses GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a Chef cookbook, or a Puppet manifest, etc.) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, etc.

Configuring Docker multi-container App with Spring Cloud Server for OpenShift Origin

I'm stuck with a few limitations while building an effective cluster-based design for a distributed app (using Docker + OpenShift Origin).
To give a brief idea of my current architecture: we have multiple WARs and microservices, and all these apps follow a common approach of reading property files from an external folder (outside of the WAR).
Example: /usr/local/share/appconfigs, which my app references from the classpath.
We use a token-based approach to generate these property files, based on the environment. These files are available in GitHub.
To Dockerize our apps (WARs & services), I build these property files first and then copy them to catalina_base in the Dockerfile, to make them available on the classpath.
Now, to make my app flexible enough to run multiple instances for different environments (example: DEV, INT, PREFIX), I am considering spring-cloud-config (server).
Brief summary:
Step 1) My externalized properties are built and available in GitHub (example: appconfigproperties)
Step 2) One Docker container runs spring-cloud-config-server to serve property files based on a profile key
Step 3) The app (WAR & other services) runs in another container using the above properties
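A sketch of steps 2 and 3 as containers, in docker-compose syntax (the image names and the repository URI are placeholders, not part of the original setup):

# docker-compose.yml (sketch)
version: "3"
services:
  config-server:
    image: my-config-server            # Spring Cloud Config Server
    environment:
      SPRING_CLOUD_CONFIG_SERVER_GIT_URI: https://github.com/myorg/appconfigproperties
    ports:
      - "8888:8888"
  my-app:
    image: my-war-app                  # Tomcat-based app container
    environment:
      CONFIG_SERVER_URL: http://config-server:8888
      SPRING_PROFILES_ACTIVE: dev      # switch to int, etc. per environment
    depends_on:
      - config-server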
Now the limitations:
I cannot use spring-cloud-config-client in my app, because it is not built on Spring Boot. So the only option I'm left with is the REST-based API to get the properties.
But I need the properties served by one running container (spring-cloud-config-server) inside another container's Dockerfile, to copy them to its catalina_base folder (so, technically, before the app runs).
If I want to run my app in DEV or INT, I just want to run a container with a few clicks, making this distributed app completely configuration-driven and on demand.
I appreciate your time reading this; please suggest changes to the solution if needed.
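Since the app is not Spring Boot, one way around the client limitation is to fetch the properties over Config Server's REST API at container start (in an entrypoint script) instead of at image build time in the Dockerfile. The /{application}-{profile}.properties endpoint is part of Spring Cloud Config Server's documented API; the script, variable names, and paths below are otherwise assumptions:

#!/bin/sh
# docker-entrypoint.sh (sketch) -- pull properties before Tomcat starts
APP_NAME=myapp
PROFILE=${SPRING_PROFILES_ACTIVE:-dev}
CONFIG_URL=${CONFIG_SERVER_URL:-http://config-server:8888}

# Config Server serves plain .properties at /{application}-{profile}.properties
curl -fsS "$CONFIG_URL/$APP_NAME-$PROFILE.properties" \
  -o "$CATALINA_BASE/conf/application.properties"

exec catalina.sh run

This keeps the image environment-agnostic: the same image runs in DEV or INT depending only on the environment variables passed at docker run time.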

Version control Vagrant and Ansible VirtualBox config with a Rails app?

I'm just setting up a new Rails app, and I have my Vagrant files along with a folder full of dev machine provisioning files for Ansible. These allow me to spin up a dev virtual machine, provision it, and have everything up and running really quickly.
My question is: should all that be in my project's version control repository? I will be working on this project across several machines, so having it accessible and synced would be useful; on the other hand, I don't want those items to be deployed when I finally deploy to production. Also, having those files committed would keep a history of them, which would also be nice.
What would you recommend?
This is very much a matter of personal preference.
Some people keep everything in a single self-contained repo. Other people keep application code in a separate repo from their configuration/provisioning/deployment code.
Either way has its own benefits and drawbacks, and there's no wrong way of doing it as long as you keep everything in some version control system.
When I set up new projects I create a directory structure along the lines of:
/<application_name>
./src
./deployment
./docs
Actual source code goes in src, any deployment-specific scripts (e.g. Ansible playbook dirs, Vagrant files) go in deployment and of course any documentation goes in docs.
Then I commit all this to source control. The deployment scripts are then written to be executed from their directory but change into the src directory to perform their actions.
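For instance, a sketch of how the Vagrantfile living in deployment/ might reach back into src/ and call the Ansible playbooks kept alongside it (the box name and playbook path are illustrative):

# deployment/Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # Expose the application source to the VM
  config.vm.synced_folder "../src", "/vagrant/src"
  # Provision with the Ansible playbooks stored next to this file
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end

Production deployment then never touches deployment/, so committing the Vagrant and Ansible files doesn't mean they ship with the app.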
