Multiple docker-compose .dcproj in a Visual Studio solution

To develop some boilerplate code for a dockerized solution, I want to have multiple docker-compose .dcproj projects in the same solution.
Let's say:
shared-docker-compose.dcproj - Shared services for interop, like an MQ
example-docker-compose.dcproj - Example services for testing/playground for the boilerplate.
app1-docker-compose.dcproj - The actual services for scenario 1
app#...
This should keep development velocity high when only one part of the solution needs an update, and each developer is free to choose which part of the whole solution to look at.
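To make the split concrete, here is a minimal sketch of what the compose files behind two of those projects might look like. The service names, images, ports, and paths are placeholders I picked for illustration (e.g. rabbitmq for the MQ), not part of the original setup:

```yaml
# docker-compose.yml behind shared-docker-compose.dcproj
# Plain infrastructure images only, no 'build:' sections.
version: "3.4"
services:
  mq:
    image: rabbitmq:3-management   # placeholder MQ image
    ports:
      - "5672:5672"                # AMQP
      - "15672:15672"              # management UI
```

```yaml
# docker-compose.yml behind app1-docker-compose.dcproj
# The services Visual Studio should build and debug.
version: "3.4"
services:
  app1-service:
    build:
      context: .
      dockerfile: App1.Service/Dockerfile   # hypothetical project path
    depends_on:
      - mq   # assumes the shared file is passed alongside this one, e.g.
             # docker-compose -f shared-docker-compose.yml -f app1-docker-compose.yml up
```

The difference between the two matters below: only the second file has a 'build:' section, so it is the only one the debugger can actually attach to.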
Okay, creating these project/compose files is not really a problem. There is no problem with docker, docker-compose, or any other part of the solution, except the IDE.
Visual Studio 2017 truly struggles with multiple dcproj files.
The behavior differs depending on the order of the dcproj entries in the .sln file.
It takes the first dcproj it finds.
Unloading all the other dcproj projects helps, but I'm not sure whether this is stable.
Even if it starts, there's no guarantee that the debugger is tracking the requested composition: e.g. when starting dcproj#3, the debugger listens to dcproj#2.
If the first dcproj found is just a plain image distributor (like starting the MQ: no code, thus no 'build:'), then the composition comes up but the developer is left sitting in silence.
Strangely, when there are multiple dcproj files in the solution and you use Add -> Docker Support, you can choose which of them to target.
In short: this is not usable without a thorough explanation to all developers. It even took me a long time to piece together which project is responsible for what.
So, here's my question:
Is there any way to set up such a composition so that it also works with Visual Studio's .dcproj handling?
Are there any recommendations for such a common scenario?
At the moment, the only way I've found is to split the solution into several solutions whose only difference is which docker-compose project is included. Thank you for any ideas and for sharing your practices.
Btw, I'd like to add a dcproj-file tag. :)

Related

What does "snowflake system" mean?

I am currently working on the topic of CI/CD. In one tutorial it is said that it is not desirable to create a "snowflake system". What does this mean?
In the previous video (thanks for finding it) he defines the term: a snowflake system is a group of servers that are supposedly equal, but in reality are not. Because both the servers and the installed software are maintained and updated manually, their software starts to diverge (e.g. some installation failed, or some server was forgotten in a deployment round).
Snowflakes look identical from a distance, but they're never equal.
The concept (or rather bad practice, in this case) of manual deployment is of course not limited to the Windows command xcopy; using Linux equivalents doesn't help either.
Ok. Thanks to #PMF, who gave me the idea to watch a previous tutorial in this series, I found out what a snowflake server is. It's about deploying to different servers: if you do the deployment by just copying files, problems can arise, because each server ends up being unique, like snowflakes in nature, which look the same but differ in the details.
So for whoever is interested, or anyone who has the same question in the future: this is it! :-)

Is it feasible to have one docker image for an already existing application with multiple dependencies

I am new to Docker and want to learn the ropes with real-life challenges.
I have an application hosted on IIS that has dependencies on SQL Express and SOLR.
I want to understand the following:
Is it possible to have my whole set-up, including IIS, SQL, SOLR and my application, in one single container?
If point 1 is feasible, how should I start with it?
Sorry if my questions are basic.
It is feasible, just not a good practice. You want to isolate the software stack to improve maintainability (easier to deploy updates), modularity (you can reuse a component in a different project, and even have multiple projects reusing the same image), and security (a vulnerability in one component of the stack will hardly be able to reach a different component).
So, instead of putting everything into the same image, I recommend using Docker Compose to have a separate image for each component of the stack (you can even pull generic, up-to-date images from Docker Hub) and assemble them in the Compose file, so that with a single command you can fire up all the components your application needs to work.
That being said, it is feasible to put the whole stack into the same Dockerfile, but it will be a serious mess. You'll need a Dockerfile that installs all the required software, which will make it bulky and hard to maintain. If you're really set on this, you'll have to start from a basic OS image (maybe Windows Server Core with IIS) and install all the other software manually from there. If Dockerfiles exist for the other components you need and they share the same base image, or a compatible one, you can copy-paste their contents straight into your Dockerfile, at the cost of said maintainability.
Also, you should definitely use volumes to keep your data safe, especially if you take this monolithic approach, since otherwise you risk losing the data in the database.
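To make the Compose suggestion above concrete, here is a rough sketch of what such a file could look like for this particular stack. The image names, tags, environment variables, ports, and volume paths are assumptions for illustration only and need to be adapted to the images you actually choose:

```yaml
# Hypothetical docker-compose.yml: one service per component, named volumes for the data.
version: "3.4"
services:
  webapp:
    build: ./MyWebApp        # hypothetical path to the IIS application's own Dockerfile
    ports:
      - "80:80"
    depends_on:
      - db
      - solr

  db:
    image: mcr.microsoft.com/mssql/server:2019-latest   # assumed SQL Server image; swap in the Express/Windows variant you need
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"                   # exact variable names depend on the image
    volumes:
      - sqldata:/var/opt/mssql                          # data path of the Linux image; Windows images use a different path

  solr:
    image: solr:8                                       # assumed tag
    ports:
      - "8983:8983"
    volumes:
      - solrdata:/var/solr

volumes:
  sqldata:
  solrdata:
```

Keep in mind that in practice all services generally need to target the same container platform, so an IIS-hosted app (Windows containers) would also require Windows-based SQL Server and Solr images, with different image names and data paths than shown here.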
TL;DR: yes, you can, but you really don't want to, since there are much better alternatives that are barely any harder.

Can Wolkenkit be deployed in a serverless environment

I'm just browsing through the docs + code and I have a quick question: do you see Wolkenkit, or any of its components, working in a serverless environment, either now or in the future?
Short answer
Unfortunately no.
Long answer
Unfortunately no, if we are talking about now. wolkenkit is very strict about separating your domain code from the technical infrastructure code that is required to run it. Right now this technical infrastructure code is focused on Docker containers, as this allows you to work not only in the cloud, but also locally, in a classic data center, or … you name it 😉
Of course it would be technically feasible and reasonable to have another type of runtime in the future that does not make use of Docker containers, but instead works with some kind of FaaS solution. As the native web (the company behind wolkenkit) is a small company, we need to decide what to focus on, and unfortunately, at least right now, this is not on the roadmap for the near future. I am not saying that this will never be done – it will just take time. Maybe someone else will come up with such a runtime and enhance the wolkenkit ecosystem.
So, to cut a long story short: for now, the answer is no. If we're talking about the future, the answer is "possibly", but without an ETA.
PS: I am one of the developers of wolkenkit, so please take my answer with a grain of salt.

Using multiple computers (per dev) and Visual Studio Online

Hello everyone.
This is my very first question here, so I apologize in advance for any violations of protocol.
We have recently moved to VSO/TFS for source control of our code, and all of us have multiple computers we work on, the usual desktop/laptop combination.
I would like to know what the best practice is for switching computers mid-work. We have been using the Shelve option, but it seems cumbersome and not exactly meant to be used this way, when all that needs to happen is switching computers.
I have personally tried mapping my workspaces to three of the major cloud-storage solutions: Dropbox, Box, and OneDrive for Business. All three have problems with "perpetual" syncing and slow syncing. What I mean is that they can't seem to handle the constant file changes that happen while Visual Studio is running, and these services don't provide good options for syncing only certain folders.
We really want to keep the workspace mapping in its default location and use the VSO/TFS system to move the code between machines, but obviously without checking in incomplete work.
Is Shelving the correct practice for this scenario?
Many thanks.
Yes, shelving was intended to handle this type of scenario.
Another option is to use a Git-backed Team Project; you can then work in a local branch and publish that local branch if you need to swap PCs.
Ideally, however, regardless of the type of version control you're using, you should make small, frequent commits to source control as "checkpoints".

What's going on inside my windows service?

I have a fairly complex Windows service (written in .NET 4) with several subsystems that run in parallel.
I have implemented pretty good logging throughout, but I feel I need more info about what each subsystem is currently doing. This would be very useful at times when I need to stop the service for upgrades or bug fixes.
It would be nice to have a GUI app that shows me the status of each part of the application I'm interested in. I've had some ideas about how to do this, but I'd like to hear others' ideas as well.
I'm interested in a solution that would be easy to drop into a future Windows service, and I'm not looking for anything very complex.
Are there any tools for this sort of thing?
Have you done this yourself?
What about interprocess communication?
Since Windows services can no longer interact with the user session, you'll need to have a separate application that does the interacting for you. Based on the details of your question, I think you understand this.
The big question is how to facilitate the communication between your Windows service and the application. There are all kinds of approaches: shared memory, sockets, pipes, remoting, etc. What I have used successfully is WCF. If your UI is going to reside on the same machine as the service, use the NetNamedPipeBinding. If you ever need access from a remote machine, you can change to the NetTcpBinding. I've found this flow chart helpful for binding selection.
If you're looking for a more formal framework approach than just straight WCF, have a look at Juval Lowy's Publish-Subscribe WCF Framework, which is described in pretty good detail in this MSDN article. The code is available via the article, or you can download the source and an example from Lowy's website here: go to the Downloads section, filter by the Discovery category, and you'll see it there.
