Has anyone had experience using all three of these technologies?
I found that Murder is Twitter's open-source project for code deploys;
it uses BitTorrent to distribute files at high speed,
but Puppet and Chef are also used for software distribution.
Can anyone give a brief introduction to these three?
Disclaimer: I am one of the Puppet developers.
Murder is a file distribution strategy: it is really, really good at getting files (and especially large files) to a whole lot of machines really fast. It integrates with other tools, like Capistrano, to actually take action beyond copying files around.
Both Puppet and Chef are, at this level, almost identical: they are both tools that take a description of how the machine should be, and then turn that into actions to make it so.
You can deploy files with them both, but they are very much classic HTTP or rsync style "copy the file to here" tools. They don't implement any P2P data transfer optimization or anything like that at this stage.
So, they can both do far more than Murder, but they are much less suited to "get this file onto 10,000 machines", and much better at "make this machine the way it should be".
You would use Murder in conjunction with some other deployment strategy, and Puppet or Chef might form part of that - but neither would replace the other.
How to publish or deploy a .NET website without delays?
I have looked at websites that are built on .NET technologies, like Stack Overflow, and I have never seen them go down because an update was being made. For my web sites, I get downtime even for minor updates, like fixing bugs in my Controllers (I use MVC), not just for bigger changes like database or server moves.
So how can I prevent site loading delays due to ASP.NET Startup?
I know that it requires the ASP.NET worker process to compile the code, but how can I prevent the traffic issues?
The typical way of doing this is to create a farm of multiple web servers and, when you want to update the site, take the servers offline and update them one at a time. See "How to create an ASP.NET web farm?".
It's all about the server setup. Most companies have at least 2-3 servers: two front-end servers and a back-end server. You can then take one of the front-end servers down, point all traffic to the other one, upgrade the server that is down, bring it back up, and repeat the process for the other.
In some cases you would use load balancing to make sure the two servers are synchronized.
You will notice that sometimes you cannot post here on Stack Overflow because of updates. That's because the database is a shared server, which can be down when it needs an upgrade. Yes, there are options to avoid this too, but it can cause problems if everyone writes to the database while the site is up, hence the "read-only mode" from time to time.
I just started learning Erlang. My task is to write a simple script for load testing web applications. I haven't found a working script on the Internet, and Tsung is too bulky for such a task. Can anyone help me (give a working example of a script, or a link to where I can find one)?
It should be possible to specify a URL, the concurrency, and the duration of the test, and then get the results. Thanks.
These links did not help:
http://effectiveqa.blogspot.com/2009/12/minimal-erlang-script-for-load-testing.html
(not working; function example/0 is undefined)
http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1
(it works for sockets, but I need concurrent testing)
I use Basho Bench for such purposes. It is not hard to get started with it and add your own test cases. It also includes a script that graphs all the results.
Would you like to build one yourself? I would not recommend going that way (I have tried, and there are many things to consider when building one, especially spawning many processes and collecting the results back).
As you already know, I would recommend Tsung. Although it is bulky, it is a full load-testing application. I gave up on mine and went back to Tsung, because mine could not properly handle opening/closing sockets with too many processes.
If you really want a simple one, I would use httperf. AFAIK, it works fine on a single machine with multiple processes.
http://agiletesting.blogspot.ca/2005/04/http-performance-testing-with-httperf.html
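(The question asks for Erlang and the recommendations above point to existing tools, so take the following purely as an illustration of the shape those tools implement: specify a URL, a concurrency level and a duration, then collect the results. It is a minimal sketch in plain Java rather than Erlang, it is not a replacement for Tsung, Basho Bench or httperf, and every name and number in it is made up for the example.)

    // Toy load generator: N workers hammer one URL until a deadline,
    // then the counts of successful and failed requests are printed.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class TinyLoadTest {
        public static void main(String[] args) throws Exception {
            final String url = args.length > 0 ? args[0] : "http://localhost:8080/";
            final int concurrency = 10;        // number of parallel workers
            final long durationMs = 10_000;    // how long to keep sending requests

            final AtomicLong ok = new AtomicLong();
            final AtomicLong failed = new AtomicLong();
            final long deadline = System.currentTimeMillis() + durationMs;

            ExecutorService pool = Executors.newFixedThreadPool(concurrency);
            for (int i = 0; i < concurrency; i++) {
                pool.submit(() -> {
                    while (System.currentTimeMillis() < deadline) {
                        try {
                            HttpURLConnection conn =
                                    (HttpURLConnection) new URL(url).openConnection();
                            conn.setConnectTimeout(2000);
                            conn.setReadTimeout(2000);
                            int code = conn.getResponseCode();  // fires the request
                            conn.disconnect();
                            if (code >= 200 && code < 400) ok.incrementAndGet();
                            else failed.incrementAndGet();
                        } catch (Exception e) {
                            failed.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(durationMs + 5000, TimeUnit.MILLISECONDS);
            System.out.println("successful: " + ok + ", failed: " + failed);
        }
    }

Even this toy version shows why the answers above recommend an existing tool: ramp-up, latency percentiles, proper socket handling and multi-machine coordination are all missing here.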
I am doing a project on motion estimation between two frames of a video sequence, using a Block Matching Algorithm with the SAD metric. It involves computing the SAD between each block of the reference frame and each block of the candidate frame within the search window, to get the motion vector between the two frames.
I want to implement the same using MapReduce, splitting the frames into key-value pairs, but I am not able to figure out the logic, because everywhere I look I only find the WordCount or query-search problem, which is not analogous to mine.
I would also appreciate it if you could provide more MapReduce examples.
Hadoop is used in situations where computations can happen in parallel and a single machine would take a lot of time to do the processing. There is nothing stopping you from using Hadoop for video processing. Check this and this for more information on where Hadoop can be used; some of those examples are related to video processing.
Start by understanding the WordCount example and Hadoop in general, run the example on Hadoop, and then work from there. I would also suggest buying the book Hadoop: The Definitive Guide. Hadoop and its ecosystem are changing at a very fast pace and it's tough to keep up to date, but the book will definitely give you a start on Hadoop.
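If it helps to see the shape of the logic, here is a rough, untested sketch of how the block-matching step could be phrased as a MapReduce job. It assumes, purely for illustration, that a preprocessing step writes one text line per reference-block/candidate-block pair in the form "blockId <TAB> dx,dy <TAB> refPixels <TAB> candPixels" (comma-separated pixel values). The class and field names are hypothetical; only the standard Hadoop Mapper/Reducer API is real.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class SadMotionEstimation {

        // Mapper: compute the SAD for one (reference block, candidate block) pair
        // and emit it keyed by the reference block id.
        public static class SadMapper extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                String blockId = fields[0];
                String displacement = fields[1];          // e.g. "3,-1"
                String[] ref  = fields[2].split(",");
                String[] cand = fields[3].split(",");

                long sad = 0;
                for (int i = 0; i < ref.length; i++) {
                    sad += Math.abs(Integer.parseInt(ref[i]) - Integer.parseInt(cand[i]));
                }
                // key = reference block, value = "dx,dy:SAD"
                context.write(new Text(blockId), new Text(displacement + ":" + sad));
            }
        }

        // Reducer: for each reference block, keep the displacement with the
        // smallest SAD, i.e. the estimated motion vector.
        public static class MinSadReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text blockId, Iterable<Text> candidates, Context context)
                    throws IOException, InterruptedException {
                long bestSad = Long.MAX_VALUE;
                String bestVector = "";
                for (Text c : candidates) {
                    String[] parts = c.toString().split(":");
                    long sad = Long.parseLong(parts[1]);
                    if (sad < bestSad) {
                        bestSad = sad;
                        bestVector = parts[0];
                    }
                }
                context.write(blockId, new Text(bestVector + " (SAD=" + bestSad + ")"));
            }
        }
    }

You would wire these two classes into a Job in a driver class, exactly as in the WordCount example; the reducer output is then one motion vector per reference block.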
What are the best practices for processing images in enterprise web applications?
I mean:
storing
associating with an entity
fast loading/caching
delayed / AJAX loading
suitable formats (PNG, JPEG)
on-the-fly editing (resizing, compression)
free libraries/helpers
image watermarking/copyrighting on the fly
Approaches already used in production are especially appreciated!
As always, every project has its own requirements, restrictions and resources (the 3 Rs). There is no 'super pattern' or 'one size fits all' method.
We cannot tell you how to implement your project, as every project is different. It's up to you to use your skills, knowledge and experience to make informed decisions about the implementation.
The 'best practice' is to research and learn each of the technologies/methods you have listed, and gain the knowledge to know when to use them based on your project's requirements, restrictions and resources.
I use ImageMagickObject in my MVC projects. It can handle:
suitable formats (PNG, JPEG)
on-the-fly editing (resizing, compression)
free libraries/helpers
image watermarking/copyrighting on the fly
fast loading/caching: maybe memcached?
delayed / AJAX loading: jQuery is a good solution
associating with an entity: Entity Framework can work with almost all databases
storing: hard question; it all depends on the functionality
I've been learning Groovy & Grails recently, and in terms of developer productivity it seems to be light years ahead of other Java solutions (Spring, Struts, EJB, JSF). If I search monster.ca for either Groovy or Grails, 0 matches are returned, which suggests Grails isn't doing too well in terms of adoption.
I realise that:
Grails is relatively new and adoption takes time
Success of a technology depends on more than just its technical merits (e.g. marketing $)
Search results on monster.ca are at best a very rough proxy for global adoption. It's possible that lots of people are using it, just not in Canada, or Canadian companies that are using it simply aren't hiring at the moment
Are there other reasons why it hasn't been adopted to the extent it seems to "deserve"?
There are probably more people using Grails than you think. Job boards show you what skills people are looking for, and Grails is fairly new, so there are not a lot of people experienced with it out on the job market.
Grails, and in particular Groovy, are very close to Java. With a few quick lessons in Groovy, a Java developer can quickly feel at home. You can very easily take a vanilla Java developer posting and put that person into a position developing with Grails.
I would say that you will see more Groovy/Grails postings in the future as more Java shops adopt these technologies.
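As a small illustration of that closeness (a made-up example, not from the question): the class below is plain Java, yet it also compiles and runs unchanged under the Groovy compiler, which is a large part of why the learning curve for a Java developer is so shallow.

    // Plain Java, but this source also compiles unchanged as Groovy.
    public class Greeter {
        private String greeting = "Hello";

        public String greet(String name) {
            return greeting + ", " + name + "!";
        }

        public static void main(String[] args) {
            System.out.println(new Greeter().greet("Grails"));
        }
    }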