I was restoring a MongoDB environment when it failed because the disk ran out of space.
Since then I cannot run any docker-compose command; every attempt prints this error:
Failed to write all bytes for _bisect.so
I found some references suggesting I free space in /tmp, but I want to be sure that is the best solution.
Remove the dangling Docker images:
docker rmi $(docker images -f dangling=true -q)
UPDATE:
You can now use docker system prune instead:
docker system prune -af
https://docs.docker.com/engine/reference/commandline/system_prune/
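Before pruning, it can help to see what is actually taking up space. A minimal sketch, wrapping both steps in a helper (the name reclaim_docker_space is hypothetical; it assumes docker is on your PATH when you run it):

```shell
# Hypothetical helper: preview Docker's disk usage, then prune.
reclaim_docker_space() {
  docker system df         # summarize space used by images, containers, volumes
  docker system prune -af  # remove all unused data (-a) without a prompt (-f)
}
```

Defining it as a function means nothing runs until you call reclaim_docker_space yourself.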
Check df.
Normally you will find both /var/lib/docker and / at 100%.
Try to free some space, for example by stopping the syslog service.
Then remove and restart your containers.
Recheck df.
Now /var/lib/docker should be around 15%.
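The check above can be done in one line (the /var/lib/docker path is the default Docker data root; it may be a separate mount on your host):

```shell
# Show usage for the root filesystem; on a Docker host, also check
# /var/lib/docker if it is mounted separately.
df -h /
```

A Use% at or near 100% on either filesystem explains the "Failed to write all bytes" error.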
During a docker-compose command, I got a similar error ("Failed to write all bytes for _ctypes.pyd") because my drive had no space left on it.
I know there are several ways to enter a running container:
docker attach container_id
nsenter
docker exec -it container_id /bin/bash
My question: I want to distribute my container, but I don't want to expose the source code inside it.
Is there a solution for this, or any good suggestions for achieving that goal?
When multiple containers are running, deleting them one by one wastes time.
docker container stop $(docker container ls -aq) && docker system prune -af --volumes
The above command tells Docker to stop the containers listed by the command substitution: inside $( ), docker container ls -aq generates the IDs of all containers, and that list is passed to docker container stop.
The && operator then runs docker system prune, which removes all stopped containers, unused networks, and, with --volumes, unused volumes.
-af makes the prune apply to all unused images, not just dangling ones (-a), without asking for confirmation (-f).
Docker CLI command:
docker rm -f $(docker ps -qa)
or
docker system prune
Create an alias to do this every time:
vi ~/.bash_profile
alias dockererase='docker rm -f $(docker ps -qa)'
source ~/.bash_profile
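One caveat: if no containers exist, the substitution is empty and docker rm prints an error. A shell function handles that more gracefully than an alias; a minimal sketch, reusing the dockererase name (assumes docker is on your PATH):

```shell
# Remove all containers, but skip the docker rm call when there are none.
dockererase() {
  ids=$(docker ps -qa)
  if [ -n "$ids" ]; then
    docker rm -f $ids
  else
    echo "no containers to remove"
  fi
}
```

Functions also take arguments and quote more predictably than aliases, so they are the usual recommendation for anything beyond a simple substitution.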
I recently started migrating my self-hosted services to docker. To simplify maintenance, I'm using docker-compose. Some of the containers depend on each other, others are independent.
With more than 20 containers and almost 500 lines of code, maintainability has now decreased.
Are there good alternatives to keeping one huge docker-compose file?
That's a big docker-compose.yml! Break it up into more than one docker-compose file.
You can pass multiple docker-compose files into one docker-compose command. If the containers break up logically, you can split docker-compose.yml by container grouping or logical use case; you could even go as far as one file per service, though then you'd have 20 files.
You can then use a bash alias or a helper script to run docker-compose.
# as an alias (arguments are appended automatically; a "$#" placeholder
# would expand to the argument count and is not needed here)
alias foo='docker-compose -f docker-compose.yml -f docker-compose-serviceA.yml -f docker-compose-serviceB.yml'
Then:
# simple docker-compose up with all docker-compose files
$ foo up -d
Using a bash helper file would be very similar, but you'd be able to keep the helper script updated as part of your codebase in a more straightforward way:
#!/bin/bash
docker-compose -f docker-compose.yml \
    -f docker-compose-serviceA.yml \
    -f docker-compose-serviceB.yml \
    -f docker-compose-serviceC.yml \
    "$@"
Note: The order of the -f <file> flags does matter, but if you aren't overriding any services it may not bite you. Something to keep in mind anyway.
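To make the ordering behavior concrete, here is a hedged sketch of how later files override earlier ones (the web service and nginx images are illustrative, not from the question):

```yaml
# docker-compose.yml (base file)
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

# docker-compose-serviceA.yml (passed later with -f, so its values win)
services:
  web:
    image: nginx:1.25-alpine  # overrides the base image; ports are kept
```

You can preview the merged result without starting anything via docker-compose -f docker-compose.yml -f docker-compose-serviceA.yml config.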
You could look at Kubernetes.
If you don't want to go all in, you can use minikube,
or the Kubernetes support baked into Docker on the edge channel for Windows or Mac, but that is beta, so perhaps not for a production system.
I deployed Jira in a Docker container.
docker run --detach --publish 8080:8080 cptactionhank/atlassian-jira-software:latest
I am accessing the files using:
docker exec -t -i containerid /bin/bash
But I am not able to find the files I need to edit, supposedly for creating a maintenance splash page.
Ref : https://confluence.atlassian.com/confkb/how-to-create-a-maintenance-splash-page-290751207.html
According to the documentation you referenced and the installation directory you mentioned, you need to edit /opt/atlassian/jira/conf/server.xml to change the context section, then edit /opt/atlassian/jira/conf/web.xml to add the new error page.
Please note that you have to access those files via /bin/bash inside the container:
sudo docker exec -i -t --user root containerid /bin/bash
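For the web.xml side, the error page is declared with standard Servlet elements. A hedged sketch (the 503 code and /maintenance.html path are illustrative; the exact entry and placement depend on your Jira version and the Atlassian guide):

```xml
<!-- conf/web.xml: serve a static maintenance page when the app errors out -->
<error-page>
    <error-code>503</error-code>
    <location>/maintenance.html</location>
</error-page>
```

After editing, restart the container for Tomcat to pick up the change.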
I have built a VG named cinder-volumes. Within this VG, I created an LV (logical volume) named leader-volume, then used it as the root disk of a KVM Ubuntu installation. During the installation process, I selected LVM partitioning.
Finally, I created a snapshot of leader-volume.
Now I want to read some files from that Ubuntu installation on the host. What should I do?
Take a look at kpartx - it's especially useful for managing VMs where entire file systems are often packed into single volumes.
kpartx can create device nodes for partitions nested on a block device or disk image.
Map the nested partitions (one of the following):
kpartx -av your_vm_disk.img
kpartx -av /dev/mapper/your_device
Where your_device could be an LVM partition. The -v option causes kpartx to display the devices it creates for nested partitions.
Mount the appropriate /dev/mapper/loopXpX:
mount /dev/mapper/loop0p1 /mnt
Unmount (the -d flag also detaches the associated loop device):
umount -d /dev/mapper/loop0p1
Remove the device mappings:
kpartx -dv your_vm_disk.img
kpartx -dv /dev/mapper/your_device
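The whole cycle can be sketched as one helper function (the name inspect_vm_disk is hypothetical; it assumes kpartx is installed, you are running as root, and the partition you want ends up at /dev/mapper/loop0p1):

```shell
# Map a disk image's partitions, mount the first one, then tear down.
inspect_vm_disk() {
  img=$1
  kpartx -av "$img"               # create /dev/mapper/loopXpX device nodes
  mount /dev/mapper/loop0p1 /mnt  # mount the first nested partition
  ls /mnt                         # ...read the files you need here...
  umount /mnt
  kpartx -dv "$img"               # remove the device mappings again
}
```

Check the output of kpartx -av first on your own image: the loop number and partition index can differ, so loop0p1 may need adjusting.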