I'm completely new to Docker and DDEV and have just started learning. My main reason for using Docker and DDEV is to work on my CMS projects. However, I noticed that by default Docker gets installed on the C: drive (in my case it's almost full). Therefore, I want to learn how to create my projects on the D: drive using DDEV.
For example, I would like to have them organized in one single folder, like:
D:\Myprojects\Drupalsites\Mysite1
Something like that.
How do I do that?
Your problem isn't DDEV and your projects, it's Docker using up your space, at least as far as I understand your question.
So what you're really wanting to do is move your Docker WSL2 data distro over to the new drive. As far as I know, Docker and WSL2 don't provide a simple way to do this, but these two links will tell you how you can do it:
https://dev.to/kimcuonthenet/move-docker-desktop-data-distro-out-of-system-drive-4cg2
How can I change the location of docker images when using Docker Desktop on WSL2 with Windows 10 Home?
I have not tried this.
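For reference, the general approach those links describe is to export the Docker data distro and re-import it on the other drive. A sketch, assuming the distro is named docker-desktop-data and D:\DockerData is the target (recent Docker Desktop versions may name their distros differently; run wsl -l -v to check):

    # from an elevated PowerShell, with Docker Desktop shut down
    wsl --shutdown
    wsl --export docker-desktop-data D:\docker-desktop-data.tar
    wsl --unregister docker-desktop-data
    wsl --import docker-desktop-data D:\DockerData D:\docker-desktop-data.tar --version 2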
One note though: most people these days are doing the recommended thing and running DDEV inside WSL2 (in /home/<youruser>). If you already have trouble with disk space with WSL2, though, you're going to have trouble with that as well. Still, WSL2 should be your future; see https://ddev.readthedocs.io/en/stable/#installation-or-upgrade-windows-wsl2
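Once you're set up inside WSL2, creating a project where you want it is just a matter of making the folder and running ddev config there. A sketch (the interactive prompts walk you through project type and docroot):

    # inside your WSL2 distro
    mkdir -p ~/Myprojects/Drupalsites/Mysite1
    cd ~/Myprojects/Drupalsites/Mysite1
    ddev config     # answer the prompts (project type, docroot, etc.)
    ddev start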
Related
I'm using Docker Desktop (4.x) on Windows 10 Pro. We are building Windows applications and using Windows containers.
On our setup, the folder C:\ProgramData\Docker (images/windowsfilter/tmp & co.) can grow to hundreds of GB, and I need to move this folder to an alternative location.
Again, I am using WINDOWS CONTAINERS (I do not care about WSL2- or Hyper-V-specific solutions).
I tried moving / creating a junction between C:\ProgramData\Docker => D:\DockerData, but the Windows containers backend does not start.
If I switch back to Linux containers, everything works fine (and I know how to move the WSL2 vhdx if needed, but again, I DO NOT NEED THAT information).
Moving the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\ProgramData location BEFORE installing Docker Desktop works, but it is not an acceptable solution.
I tried configuring the data-root directory in %USERPROFILE%\.docker\windows-daemon.json, but it does not work; the Windows containers backend does not start.
Please give me a reliable way to move the C:\ProgramData\Docker folder to another location.
Unfortunately, when using Windows containers, it is not yet feasible to relocate the C:\ProgramData\Docker folder to another location. The Docker for Windows service is hard-coded to use this directory to store container images and other data.
As a workaround, you might try using a symbolic link to redirect the C:\ProgramData\Docker folder to another location. This may not be a reliable approach, though: the Docker for Windows service might not handle the symbolic link correctly, which would prevent the service from starting.
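For what it's worth, the junction experiment usually looks something like this, run from an elevated Command Prompt with the Docker service stopped (D:\DockerData is an assumed target, and per the question above the Windows containers backend may still refuse to start afterwards, so treat it strictly as an experiment):

    REM elevated Command Prompt; Docker Desktop shut down first
    net stop docker
    robocopy C:\ProgramData\Docker D:\DockerData /E
    rmdir /S /Q C:\ProgramData\Docker
    mklink /J C:\ProgramData\Docker D:\DockerData
    net start docker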
I have an AWS EC2 account where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container technology that takes a snapshot of the filesystem. Now, for some reason, I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container on my old virtual system, then take an image and deploy it on my new system? This way I could remove the headache of having to install many components and configure nginx and all the applications on the new system. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole lot of services; quite the contrary. Docker does not aim at taking a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (yet unclear) question is "Can I virtualize my whole system into one image?", the answer is no.
What you can do is use an image for each of your services (you'll find everything you need on Docker Hub) to keep your new system clean.
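For example, the nginx part of the stack becomes a single container from the official image, with your site content and config mounted in (paths here are illustrative):

    # run nginx from the official image, mounting site content and config read-only
    docker run -d --name web -p 80:80 \
      -v /srv/mysite/html:/usr/share/nginx/html:ro \
      -v /srv/mysite/nginx.conf:/etc/nginx/nginx.conf:ro \
      nginx:1.25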
Another solution would be to list all the installed Linux packages on your old system, install them on the new one, and copy over all the configuration files.
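On Debian/Ubuntu, that package-list approach can be as simple as the following sketch (adjust for your distro's package manager):

    # on the old instance: record the installed packages
    dpkg --get-selections > packages.list
    # copy packages.list (plus /etc/nginx, app configs, etc.) to the new instance, then:
    sudo dpkg --set-selections < packages.list
    sudo apt-get -y dselect-upgrade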
I am very new to Docker and am currently trying to get my head around whether there is any best-practice guide for updating software that runs inside a Docker container in a very large distributed environment. I already found a couple of posts about updating a MySQL database in Docker, etc. That gives a good hint for any software that stores data, but what if you want to update other parts of your own software package, or services that are distributed and used by several other Docker images through docker-compose?
Is there someone with real-life experience doing this in such an environment who can help me or other newbies understand the best practices in Docker, if there are any?
Thanks for your help!
You never update software in a running container. You pull down a new version from the hub. If we assume you're using the latest tag (which is a bad idea; always pin your versions) of your image, and it's one of the official library images or a publicly available image that uses automated builds, you'll get the latest version of the container image when you pull.
This assumes you've also separated the data out of your container, either as a host volume or using the data-container pattern.
The container should be considered immutable; if you change its state, it's no longer a true version of the image.
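In practice, an update is a pull-and-replace cycle rather than an in-place upgrade. A sketch with a pinned tag and a host volume (image name, tag and paths are illustrative):

    docker pull mysql:8.0.36                   # pinned version, not :latest
    docker stop app-db && docker rm app-db     # discard the old container
    docker run -d --name app-db \
      -v /srv/mysql-data:/var/lib/mysql \
      mysql:8.0.36                             # data survives in the host volume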
I am trying to understand how Docker can be used to dockerize a multilayered application.
My Tomcat application needs MongoDB, MySQL, Redis, Solr and RabbitMQ. I have been playing with Docker for a couple of weeks now. I am able to install and use mongo/mysql containers, but I am not getting how I can completely ship an application using Docker. I have a few questions.
How should the images be organized? Should I have one image that has all the components installed, or separate images (like one for Tomcat, one for Mongo, one for MySQL, etc.) and start those containers using a bash script outside of Docker?
What is the Docker way of maintaining multiple containers at once? Say I have multiple containers (like mongo, mysql, tomcat, etc.) that need to work together to run my application; is there any built-in way of dealing with this, so that one command/script does it all?
Suppose I dockerize my application; how can I manage the various routine tasks that need to be performed, like incremental code deployment, database patches, etc.? Currently we are using Vagrant, and we also use Fabric along with Vagrant for various tasks. After vagrant up we use fab tasks for all kinds of routine things like code deployment, db refresh, adding volumes, starting/stopping services, etc. What would be Docker's way of doing this?
With Vagrant, if a VM crashes due to high CPU etc., the host system is not affected. But I see Docker eating up a lot of host resources. Can we put limits on that, say no more than one CPU core for a given container?
Because we use Vagrant, most of the questions above are in that context. When I started with Docker, I thought of Docker as a kind of virtualization technology that could be a replacement for our huge Vagrant-based infra. Please correct me if I am wrong.
I advise you to look at docker-compose:
you'll be able to define the architecture of your application
you can then easily build it and run it with one command (see the sketch after this list)
pretty much the same setup for dev and prod
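A minimal docker-compose.yml for the stack described in the question might look like this (image tags and settings are illustrative, not recommendations; in practice the app service would be your own application image):

    # write a compose file describing the whole stack, then start it with one command
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      app:
        image: tomcat:9        # placeholder for your application image
        ports:
          - "8080:8080"
        depends_on: [mysql, mongo, redis, rabbitmq, solr]
      mysql:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example
        volumes:
          - dbdata:/var/lib/mysql
      mongo:
        image: mongo:6
      redis:
        image: redis:7
      rabbitmq:
        image: rabbitmq:3
      solr:
        image: solr:9
    volumes:
      dbdata:
    EOF
    docker-compose up -d       # one command brings up the whole architecture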
For microservices, composition, etc., I won't repost on this here.
For container resource allocation:
docker run has various resource-control options (using Linux cgroups); see my gist here:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
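For example (flags current as of recent Docker versions; older releases used --cpu-shares and similar, and the image name here is a placeholder):

    # cap the container at one CPU core and 512 MB of RAM
    docker run -d --cpus="1.0" --memory="512m" --memory-swap="1g" yourimage:tag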
I am totally new to Docker and have, so far, only used the images available on Docker repos.
I have already tested and been using Docker for some aspects of my daily activities, and it works great for me, but in some specific cases I need a "virtual" image of Linux with graphics support (X in Ubuntu or CentOS), and so far the images I have encountered on Docker repos don't have X support by default.
Should I in this case use a standard VirtualBox or VMware image? Is it possible to run a visual version of Linux in a Docker container? I haven't tried it yet.
If you run your containers in privileged mode they can access the host's resources (and anything else, for that matter), so in essence it is possible, though I'd be willing to bet that it turns out to be more trouble than it's worth, because the containers won't be quite as portable as ones that don't require such outside resources.
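That said, for X applications a commonly used variant is not full privileged mode but sharing the host's X socket with the container. A sketch for a Linux host (note that xhost +local: loosens X access control, so use it with care):

    # allow local connections to the X server, then hand the socket to the container
    xhost +local:
    docker run -it --rm \
      -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      ubuntu:22.04 \
      bash -c "apt-get update && apt-get install -y x11-apps && xeyes"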