I am currently working on the topic of CI/CD. One tutorial says that it is not desirable to create a "snowflake system". What does this mean?
In the previous video (thanks for finding it) he defines the term: a snowflake system is a group of servers that are allegedly equal but in reality are not. Because both the servers and the installed software are maintained and updated manually, their software starts to diverge (e.g. some installation failed, or some server was forgotten in a deployment round).
Snowflakes look identical from a distance, but they're never equal.
The concept (or, in this case, bad practice) of deploying manually is of course not limited to the Windows command xcopy; using Linux equivalents doesn't help.
OK, thanks to PMF, who gave me the idea to watch a previous tutorial in this series, I found out what a snowflake server is. It's about deploying to multiple servers: if you do the deployment by just copying files over, problems appear because each server ends up unique, like snowflakes in nature. They look the same, but in detail they are different.
So, for whoever is interested or has the same question in the future: this is it! :-)
I'm just browsing through the docs and code and I have a quick question: do you see Wolkenkit, or any of its components, working in a serverless environment, either now or in the future?
Short answer
Unfortunately no.
Long answer
Unfortunately no, if we are talking about now. wolkenkit is very strict in separating your domain code from the technical infrastructure code that is required to run your domain code. Right now this technical infrastructure code is focused on making use of Docker containers, as this allows you to work not only in the cloud, but also locally, or in a classic data-center, or ... you name it.
Of course it would be technically feasible and reasonable to have another type of runtime in the future that does not make use of Docker containers, but instead works with some kind of FaaS solution. As the native web (the company behind wolkenkit) is a small company, we need to decide what to focus on, and unfortunately, at least right now, this is not on the roadmap for the near future. I am not saying that this will never be done; it will just take time. Maybe someone else will come up with such a runtime and enhance the wolkenkit ecosystem.
So, to cut a long story short: for now, the answer is no. If we're talking about the future, the answer is "possibly", but without an ETA.
PS: I am one of the developers of wolkenkit, so please take my answer with a grain of salt.
Hi everyone.
This is my very first question here, so I apologize in advance for any violations of protocol.
We have recently moved to VSO/TFS for source control of our code, and all of us have multiple computers we work on, the usual desktop/laptop combination.
I would like to know what the proper and best practice is for switching computers mid-work. We have been using the Shelve option, but it seems cumbersome and not exactly meant to be used in this way, when all that needs to happen is the switching of computers.
I have personally tried mapping my workspaces to three of the major cloud-storage solutions: Dropbox, Box, and OneDrive for Business. All three have problems with "perpetual" syncing and slow syncing. What I mean is that they cannot seem to handle the constant file changes that happen while Visual Studio is running, and these services don't provide good options for syncing only certain folders.
We really want to keep the workspace mapping in its default location and use the VSO/TFS system to move the code around the machines, but obviously without checking in incomplete work.
Is Shelving the correct practice for this scenario?
Many thanks.
Yes, shelving was intended to handle this type of scenario.
Another option is to use a Git-backed Team Project; you can then work in a local branch and publish that local branch if you need to swap PCs.
Ideally, however, regardless of the type of version control you're using, you should make small, frequent commits to source control as "checkpoints".
Do you have any experience with the SD Erlang project?
It seems to implement many interesting concepts regarding communication-mesh optimizations, and I'm curious whether any of you have already used them in production, or at least in some real project.
SD Erlang repo
Thanks!
The project finished a week ago. The main ideas behind SD Erlang are reducing the number of connections Erlang nodes maintain while keeping transitivity and a common namespace for groups of nodes. The benchmarks that we used (Orbit, Ant Colony Optimization (ACO), and an Instant Messenger) showed very promising results. Unfortunately, we didn't have enough human resources to refactor the Sim-Diasca simulation engine. So, no, SD Erlang hasn't been used in a real application yet.
At the moment we are writing up the last deliverable, which will provide an overview of what has been achieved. It will appear here in a few weeks (D6.2). In general we are happy with the results we get using SD Erlang, so there are plans for a follow-up project to continue working on it, but currently this is work in progress.
This is not a direct answer, but I will use SD-Erlang in an embedded application that needs to scale to hundreds of nodes (small embedded CPUs). From what I have seen it's ready to be tried out in a real application. To evaluate it further, let's consider the alternatives:
You have only a few distributed nodes: then you probably don't need it and can just connect all the nodes, and for the name registry use either the global module (slow but sturdy) or gproc with the new locks_leader branch, which avoids the quite broken gen_leader that has so far prevented using gproc in distributed mode in production.
You need many nodes (how many depends on your hardware and requirements, but you start to get into interesting territory with > 70 nodes):
Use SD-Erlang and fix whatever problems you encounter in production, or at least report them. It certainly solves a lot of the problems you get with normal Erlang distribution.
Roll your own solution, either by playing with different cookie values or with hidden nodes (hint: you can set different cookie values for different peer nodes; see the sketch below). But then you need to roll your own global name registry and management code, which looks like a variant of Greenspun's 10th rule, or closer to Virding's 1st rule: you will probably end up implementing half of SD Erlang yourself.
Don't use Erlang distribution at all. That seems to be the industry consensus: for anything involving more nodes or crossing data centers, you shouldn't use Erlang distribution but run your own protocols. My personal opinion is that it is better to fix Erlang distribution's problems than to ditch it. It's much too useful and time-saving, when it works for a use case, to just give up on it. And I see SD-Erlang as the fix for the "too many nodes" problem; it's at least the right starting point.
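To make the cookie hint above concrete, here is a rough sketch; the node names and cookie atoms are made up, and erlang:set_cookie/2 only controls which cookie the local node presents to a given peer (the peers must of course be configured to match):

    %% Partitioning a cluster with per-peer cookies (hypothetical names).
    connect_core_group() ->
        %% Present 'core_cookie' when talking to the core nodes...
        erlang:set_cookie('core1@hostA', core_cookie),
        erlang:set_cookie('core2@hostB', core_cookie),
        %% ...and a different cookie for the edge group, so the two
        %% groups never join into one fully connected mesh.
        erlang:set_cookie('edge1@hostC', edge_cookie),
        net_kernel:connect_node('core1@hostA').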
I am trying to illustrate the concept of distributed applications using Erlang. My system currently has one server and one ATM, and I try to keep it as simple as possible.
For the moment my application runs locally. I am using gen_server for a client-server relationship between the banking server and the ATM. I also have a gen_fsm module to model the different states my ATM has. To store data I use the dict module (I don't want to make things more complicated by using a database). To keep the processes alive (the gen_server and gen_fsm) I am using a supervisor process. I've wrapped all modules as an application, but for the moment it's all local. Any ideas would be highly appreciated.
I was thinking of starting the same application on two different nodes and illustrating the distributed concept with some kind of failover/takeover mechanism, but I have no idea which modules to use.
Is it mandatory to use target systems? (At some point I must do a hot upgrade of the application.)
What's the correct order to do these things: upgrade first and then distribute?
I would be very grateful if someone could give me some ideas on how to accomplish all those things.
I never tried it myself, but docs seem to point to:
Erlang Release Handling (11.3 Distributed Systems).
It's a really short paragraph showing the sync_nodes command; I suggest you read the whole chapter, because I have noticed that the concepts of concurrency and distribution are so pervasive in Erlang that problems like yours have often already been solved and included in OTP.
BTW, the Erlang user guide also has a whole chapter dedicated to Distributed Applications, which covers the related configuration options; I think the two together should do the trick.
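For reference, the distributed-application setup described in that chapter boils down to a kernel configuration along these lines; the application name 'atm' and the node names are hypothetical, and each node would get such a file:

    %% sys.config (sketch): run 'atm' on a@host, fail over to b@host.
    [{kernel,
      [{distributed, [{atm, 5000, ['a@host', {'b@host'}]}]},
       {sync_nodes_mandatory, ['b@host']},
       {sync_nodes_timeout, 30000}]}].

On takeover, the application's start/2 callback is called with {takeover, Node} as the start type, which is where you would move any state across.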
Hope this helps, if you need more help just ask!
I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and jQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my uptime and to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, GIT, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT that each get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9% and better kool-aid is a bit dangerous to drink while blind. Yes, you can get some astounding stability figures, but you need to write your program in a way facilitating this. It does not come for free. It does not happen by magic either. Your application must be designed in a way such that other subsystems recover. OTP will help you a lot - but it still takes time to learn.
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
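As an illustration, a CI job that calls rebar's eunit command only needs eunit test modules of roughly this shape on the code path (the module and the functions it tests are invented for the example):

    %% billing_tests.erl (sketch) - picked up by `rebar eunit`.
    -module(billing_tests).
    -include_lib("eunit/include/eunit.hrl").

    round_to_cents_test() ->
        ?assertEqual(100, billing:round_to_cents(99.6)).

    peak_rate_test_() ->
        [?_assert(billing:rate(peak) > billing:rate(offpeak))].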
QA Cycle: I have not had any problems here. Again, if you are using rebar you can build embedded releases, which are minimized Erlang VMs that you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
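The "overlay of newer fixes" idea can be as simple as the following rough sketch; the patch directory and the module name are made up:

    %% Apply a hot fix by overlaying a patch directory on the code path.
    apply_hotfix() ->
        true = code:add_patha("/opt/myrel/patches/001"),
        code:purge(billing),
        {module, billing} = code:load_file(billing).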
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
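As a rough sketch of what that can look like, here is a minimal error_logger report handler that forwards every event to a central collector process; the collector name and the 'log@central' node are made-up examples:

    %% central_log_h.erl (sketch): forward error_logger events to a collector.
    -module(central_log_h).
    -behaviour(gen_event).
    -export([init/1, handle_event/2, handle_call/2, handle_info/2,
             terminate/2, code_change/3]).

    init(Collector) -> {ok, Collector}.

    handle_event(Event, Collector) ->
        Collector ! {log, node(), Event},  % e.g. Collector = {collector, 'log@central'}
        {ok, Collector}.

    handle_call(_Request, State) -> {ok, ok, State}.
    handle_info(_Info, State) -> {ok, State}.
    terminate(_Reason, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.

    %% Installed on each node with:
    %% error_logger:add_report_handler(central_log_h, {collector, 'log@central'}).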
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
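For example, the kind of probe you would wire into an external alerting tool is just a few built-in calls; what you measure and which thresholds you alert on are up to you:

    %% Snapshot of basic node health figures for an external alerting system.
    node_health() ->
        [{memory_total_mb, erlang:memory(total) div (1024 * 1024)},
         {process_count,   erlang:system_info(process_count)},
         {run_queue,       erlang:statistics(run_queue)}].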
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
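Because the configuration is plain Erlang terms, a drift check is little more than file:consult/1 plus pattern matching; a tiny sketch (the file layout and the pool_size key are invented):

    %% Read an Erlang-term config file and validate one expected setting.
    check_config(Path) ->
        {ok, Terms} = file:consult(Path),
        case proplists:get_value(pool_size, Terms) of
            N when is_integer(N), N > 0 -> ok;
            Other -> {error, {bad_pool_size, Other}}
        end.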
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
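A typical fprof session, for instance, looks like this sketch; the function being exercised is hypothetical:

    %% Trace the calling process, then turn the trace into a readable report.
    profile_burst() ->
        fprof:trace(start),
        my_app:handle_burst(1000),
        fprof:trace(stop),
        fprof:profile(),
        fprof:analyse([{dest, "fprof.analysis"}]).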
Windows interaction: Dunno. (Bias: the last time I used Windows professionally was in 1998 or so.)
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process, and it might resist. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around the limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then it is better to cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months, if not years; partial benefit can be had in weeks, though.