Hardware requirements for PlasticSCM server

I'm evaluating PlasticSCM on a VMware machine with 4 GB RAM and a 4-core CPU. After I imported our trunk into the server (about 6 GB of data), the service ran out of memory and started swapping. I've since increased the VM's RAM to 6 GB, which is already more than I'd like to load the host system with, since I also run VMs for the PlasticSCM client, the TeamCity server, and a TeamCity agent.
I was trying to find a spec detailing the hardware requirements for running the PlasticSCM server, including how they scale. So far, I've only found the minimum requirements (512 MB RAM etc.) and the system information from your heavy-load and scale tests. As far as I can see, it's all about RAM. :)
Anyway, is there a detailed spec with hardware recommendations?
P.S.: Of course, if we switched to Plastic we'd run the service on a physical machine instead of a VM.

Related

Why is Ruby on Rails testing super slow on Linux?

I have reviewed blog posts from 2008 to date. I have inherited a Ruby on Rails project for which I need to expand the test suite.
I work on an Asus laptop with an 8th-gen Intel Core i7 (U-series) CPU, 16 GB RAM, and a 512 GB SSD.
I was initially running Ubuntu 19.10 when I started the project; with about 1200 tests, the suite takes more than an hour to run. On a 2015 MacBook Pro with 8 GB of RAM and an HDD, it takes only 2-3 minutes.
log/test.log does not report errors and the tests do not hang, but waiting this long is not workable, especially as I will be adding more tests.
So I uninstalled Ubuntu, wiped the SSD, and installed Solus, Arch, and Ubuntu in turn, with the same setup on each (asdf as the version manager), and on no distro did the suite run in under an hour.
Does anyone know why this happens on Linux? The Mac setup also uses asdf, and it is fast enough.
Without knowing the specifics of the codebase or the tests, this question is equivalent to "how long is a piece of string."
There are many differences between Linux and macOS. Cryptographic libraries may have different defaults. Memory limits for threads will be different. Memory limits for processes may be different too.
Unless you can isolate specific tests which are wildly different and extrapolate from there, it's almost certainly going to come down to OS-level differences.
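To isolate them, start with per-test timing rather than whole-suite timing. A minimal sketch, assuming the suite uses RSpec, with a rough Minitest equivalent (the file path is illustrative, not from the asker's project):

bundle exec rspec --profile 20                          # list the 20 slowest examples
time bundle exec rails test test/models/user_test.rb    # Minitest: time one file in isolation

If a handful of tests dominate on Linux but not on macOS, that points at a specific subsystem (crypto, filesystem, networking) rather than the OS as a whole.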

Docker on Windows in Production

I've been asked to research Docker. The question that I cannot get a definitive response to is "can you run Docker on Windows in production?".
I keep seeing "Docker image containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts and Linux images can run only on Linux hosts, meaning a host server or a VM."
I'm not interested in running containerized Windows applications (.NET). We have Spring Boot (Java) applications and are building a microservices architecture. These containerized apps don't need an OS running in the same container.
We also need an orchestration engine like Kubernetes, and it's unclear whether that can run in production on Windows either.
I've been fighting the good fight trying to get deployment environments switched to Linux, but that's a losing battle at this point.
Citing the docs:
Welcome to Docker for Windows! Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems.
Take this literally. It's meant by the vendor as a dev tool to develop your Docker environment on Windows, not a production environment. To run it in production, they expect a Linux host.
It's not clear whether the OP is asking "Can I run Docker on Windows in production?" (say, from a licensing perspective) or "Should I run it?" (from an experience perspective). I have an answer that should address both points.
It's indeed interesting to note first that as I write this, all the answers and comments so far are from 2018 or (like the question) 2017.
Here's at least one 2019 post on the topic from Docker (including listing clients running in production, so it addresses both points):
https://www.docker.com/blog/5-reasons-to-containerize-production-windows-apps-on-docker-enterprise/
And while the title refers to Docker Enterprise, the article does say "Hundreds of enterprises now run Windows container nodes in production", without that Enterprise caveat.
Even so, folks who may "not want to pay to run Docker Enterprise" should note that Windows Server 2016 and 2019 include a license for Docker Enterprise at no extra cost. (As for the recent upheaval at Docker in which the Enterprise product was sold to Mirantis, there's no indication yet that this will change the included Windows licensing.)
Still, I realize that the OP and other readers may seek still more (documented) evidence of production Docker deployment on Windows. I'll leave that for others to elaborate. Just didn't think this should stand here without anything more recent than July 2018.
Check out this blog; it states that "Windows Server 2016 is where Docker Windows containers should be deployed for production".
First of all, I suspect this question is rather stale after 3 years. I don't know if you are still struggling with the problem, but I would love to hear about your experience and the route you took.
This is probably a biased answer, but I will try to answer from my own experience. Like you, we lost the good fight to persuade our client to use a Linux server. We have 2 bare-metal machines and a small bunch of virtual machines running Windows Server 2019, version 1809 (not the cutting edge, but the most recent stable version). It was indeed an improvement over WS2016; however, it still had some problems. The major one was with Docker Swarm: overlay networking with the routing mesh was not working properly, so we had to fall back to containers managed with docker-compose and manual service discovery, which rather defeats the purpose of Docker.
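For illustration, that fallback looks roughly like the following — a sketch only, with the stack and file names invented here; the point is that each host runs the compose file directly and publishes ports itself instead of relying on the swarm routing mesh:

# Swarm route that did not work reliably for us:
#   docker stack deploy -c docker-compose.yml mystack
# Fallback: run the same compose file locally on each node:
docker-compose -f docker-compose.yml up -d
docker-compose ps        # verify the containers and the ports they publish

Service discovery then has to be maintained by hand (fixed host ports plus a reverse proxy or per-service config), which is the part that defeats the purpose.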
That being said, the problem with the Swarm network could be down to the fact that we were using virtual machines and Hyper-V switches. On top of that, we had no direct access to the host network and had to jump through bureaucratic hoops whenever we needed network changes, which got old very fast when we wanted to test things. Additionally, we did not have Active Directory, owing to our lack of confidence in the network; I am still not sure whether domain controllers would play well with Docker in a virtual environment. Still, not having AD was manageable, since we did not have many machines.
Another problem was that we did not have nested virtualization (i.e., we could not run Moby) due to CPU issues, and WSL 2 with Docker support is not available on WS2019 LTSC editions. So I had to write our own Windows images for much of the software we use, like Jenkins, Redis, etc. You can find the Dockerfiles here if interested, but obviously keeping them up to date and tidy was a lot of work, and I did not have much time to invest.
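To give a flavor of what writing your own Windows images involves (an illustrative sketch, not one of the actual Dockerfiles mentioned above; the base image is the standard Windows Server Core one):

# Dockerfile (illustrative):
#   FROM mcr.microsoft.com/windows/servercore:ltsc2019
#   # ...download and install a JRE and the service (e.g. Jenkins) here...
docker build -t local/jenkins-windows .
docker run -d --name jenkins local/jenkins-windows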
Performance-wise, we seemed to have no issues, but we did not really do a comparative analysis.
All in all, I love Docker; it is a great product. But after this project, I would not touch Docker on Windows in a production environment with a ten-foot pole. In fact, I don't know if I will ever use a Windows machine as a production environment again. It is fine for development, though.
My understanding is that containers on Windows Server should be fine for production, while containers on Windows Desktop are only for dev and test, not production. I also came across the "MICROSOFT SOFTWARE SUPPLEMENTAL LICENSE FOR WINDOWS CONTAINER BASE IMAGE" (https://learn.microsoft.com/en-us/virtualization/windowscontainers/images-eula), though I'm not sure how it bears on this question.
I can highly recommend not using Docker Desktop for Windows in production.
The host machines (Windows 10 Pro) were configured to restart every day at a certain time, and the containers were Linux containers using, as recommended, the WSL 2 based engine.
I tested this on 20 devices for over a year, and of those 20 PCs at least 5 hit a problem where Docker Desktop could not be initialized. That means Docker Desktop will not start again until you remove some folder(s) in %APPDATA%; and even then, that fix only worked 3 out of 5 times for me. One time I needed to reinstall Docker Desktop; for the remaining one I needed to reload all the Docker images and configure them again. Most of these issues seemed to be the result of a power cut.
Most annoying, from my perspective, are the updates: from one version to the next, all images and running containers were gone and I needed to reconfigure them. This has happened with two versions so far, though not on every computer.
The Linux machines, on the other hand, had no issues.

Docker not releasing memory when shut down, Windows 10

I have recently started using Docker for new development work; however, I am still required to switch back to working on our older on-premise offering from time to time. That is, I sometimes need to shut down Docker and spin up an installation of our on-premise server.
I find that when I do this with Docker installed, the performance of this server is terrible, essentially unusable; I need to uninstall Docker to get it working again.
When I have Docker running, I can see it using the memory (my machine has 32 GB of RAM; I am telling Docker to use 16), and when I shut Docker down I can see the memory being released, according to Task Manager anyway; Hyper-V Manager also shows that the VM has been shut down. However, the on-premise server install continues to perform as if the memory were still in use. This is not a small performance hit: actions that should take 1 second take 20 or 30.
It would seem that Docker is not actually releasing the memory on shutdown and only does so when I uninstall it; when I do that, performance recovers completely.
Is this a known issue? Is there anything else I can try to see where the memory is going? I can find no other reports of it.
I am using Windows 10 with Docker version 17.03.1-ce-win5 (10743).

AWS high memory usage

Recently I configured a micro instance in EC2 with GlassFish and MySQL on Windows.
I deployed my WAR and was able to access my site over HTTP.
I changed my application, redeployed the WAR, and it also worked.
When I was about to redeploy the WAR for the 4th or 5th time, the application got deployed (I saw the message in the log file), but I was unable to access the site over HTTP.
Then I tried the command "asadmin list-applications" and got the following message:
Error occurred during initialization of VM
Could not reserve enough space for object heap
After that I was unable to connect to my instance through RDP and had to reboot; I could access it again after that. I started the servers again (GlassFish, MySQL), but no luck.
I noticed that memory usage is around 90% or more, while CPU usage is low.
Now I cannot access my site over HTTP. What should I do?
Thanks in advance!
Honestly, there are a couple of issues working against you here:
1) Windows requires FAR more RAM than Ubuntu to run at a minimally decent level.
2) GlassFish has a much larger footprint than Tomcat or Jetty.
Is there any particular reason you need Windows? For example, does your server need to run executables for file processing or something similar outside the JVM? Most would agree that Linux (Ubuntu or another distribution) will give you much better performance and stability for running an app server like GlassFish in any environment.
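Whichever OS you keep, note that a micro instance has roughly 1 GB of RAM, and GlassFish's default heap plus MySQL plus the OS can easily exhaust it, which matches the "Could not reserve enough space for object heap" error. A sketch of capping the heap with asadmin (the 256m value is illustrative; check your domain's current settings first):

asadmin list-jvm-options                  # inspect the current -Xmx setting
asadmin delete-jvm-options "-Xmx512m"     # remove the existing cap (use whatever value was listed)
asadmin create-jvm-options "-Xmx256m"     # set a smaller maximum heap
asadmin restart-domain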

Who loads the code in BIOS during booting?

I am studying the boot process in Linux, reading this HTML page: http://www.tldp.org/HOWTO/Bootdisk-HOWTO/x88.html. The first line under section 3.1, "The boot process", says that "All PC systems start the boot process by executing code in ROM (specifically, the BIOS)".
My doubts are:
Who loads the code in the BIOS?
Where is this BIOS code located?
Where is the BIOS code loaded to, and where is it executed?
Kindly point me to references where I can get more information.
Thanks,
LinuxPenseur
The code is already there when the computer is powered on: it lives in non-volatile memory (a ROM or flash chip), so it does not disappear when the computer is turned off. Nothing has to load it. The ROM is mapped into the processor's address space at a fixed location, and on x86 the CPU begins execution at the reset vector (physical address 0xFFFFFFF0, which falls inside the mapped BIOS ROM), so the processor simply starts running the BIOS code directly from that address.
More info here
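If you want to see this mapping for yourself on Linux: the legacy BIOS region is memory-mapped just below 1 MB (0xF0000-0xFFFFF), so on machines where the kernel permits it you can dump it. A sketch only; many kernels block /dev/mem reads via CONFIG_STRICT_DEVMEM, in which case the dd will fail:

sudo dd if=/dev/mem of=bios-segment.bin bs=64k skip=15 count=1   # 15 * 64 KB = 0xF0000
strings bios-segment.bin | head                                  # vendor and date strings usually appear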
A good question! Actually, you do not need to reformat the HDD or even reinstall the OS on it, unless the new PC is unable to run the existing OS on the drive.
Commonly, if you did a simple install of a Linux distribution, you would have no trouble moving the HDD to a new system and just running it. But if the OS is a version of Windows, the chances of that working are nearly zero: hardware vendors nearly always tune their device drivers for Windows, so you often cannot even use the same driver for two versions of Windows on the same machine (upgrading from XP to Windows 7, for example, often requires redownloading at least a few hardware drivers).
The problem can arise even with Linux if you have installed any high-performance drivers. Sometimes you can perform a "recovery boot" from GRUB or LILO and get into a text-mode environment with internet access, though. If you can do that, you can often install the drivers for the new PC on the Linux HDD without doing a complete reinstall of Linux.
In fact, this is essentially what an install CD or DVD does: it boots into a very vanilla flavor of the OS (Windows or Linux), installs drivers for the hardware it detects, reboots (hopefully with functioning drivers), and wraps up the install process.
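A rough sketch of that recovery-boot route, assuming a Debian/Ubuntu-style system (package and interface names are illustrative; substitute your distribution's equivalents):

# From the GRUB menu, choose the recovery/single-user entry, then:
dhclient eth0                               # bring up wired networking (interface name may differ)
apt-get update
apt-get install --reinstall linux-firmware  # refresh firmware/driver packages for the new hardware
update-initramfs -u                         # rebuild the initramfs so the new modules are available at boot
reboot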
