I created a Docker container with this command:
docker run -d -p 20001:80 -v /home/me/folder1/:/usr/local/apache2/htdocs/ httpd:2.4
This container contains scripts which create files and directories in the /usr/local/apache2/htdocs/ folder.
I can see these files on the host computer in the /home/me/folder1/ folder.
I have tried to open one of these files because I want to write something to it.
I cannot do that because I do not have write permission on these files; they are owned by the root user.
What can I do to make these files writable by the "me" user? I want this to happen automatically.
Thanks a lot
You have to run:
sudo chmod +x nameofscript.sh
With this command, executed as root, you make these scripts executable for all users.
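If the actual goal is write access for the host user rather than execute permission, one possible approach is to change the ownership of the generated files from inside the container. This is only a sketch: the container ID and the 1000:1000 UID:GID are placeholders (use docker ps, and the output of id -u / id -g on the host):
# Sketch: hand ownership of the generated files to the host user "me".
# <container-id> and 1000:1000 are placeholders, not values from the question.
docker exec <container-id> chown -R 1000:1000 /usr/local/apache2/htdocs/
To make this automatic, the scripts inside the container could run the same chown right after they create the files.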
I want to run a specific docker-compose file without entering the sudo password and without adding the user who runs the command to the docker group, for security reasons.
I thought about using NOPASSWD in the sudoers file and running a bash script called "bash-dockercompose-up.sh" that simply runs docker-compose up -d.
However, docker-compose up -d still needs to be prefixed with sudo in order to connect to the Docker host.
This is my /etc/sudoers file:
exampleuser ALL=(root) NOPASSWD:/usr/bin/bash-dockercompose-up.sh
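For reference, a minimal sketch of what the wrapper script might contain (the project path is a placeholder, not taken from the question):
#!/bin/bash
# bash-dockercompose-up.sh -- minimal sketch; /path/to/project is a placeholder
cd /path/to/project || exit 1
docker-compose up -d
It would then be invoked by exampleuser as: sudo /usr/bin/bash-dockercompose-up.sh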
OK, I was able to run it by using the official Python SDK library.
https://docs.docker.com/engine/api/sdk/
I created a Python script called "service-up.py".
service-up.py
import docker
client = docker.from_env()
container = client.containers.get('id or name here')
container.start()
Then compile it into a binary so that you can change its setuid permissions and a non-root user can run it:
pyinstaller service-up.py
Move into the dist folder where the file is located and run:
chown root:root service-up
chmod 4755 service-up
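Assuming the setuid approach works as described above, a regular user should then be able to bring the container up just by running the binary:
# runs with root's effective UID thanks to the setuid bit
./service-up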
How do I create an empty folder inside a docker container using a Dockerfile?
I guess I could just copy an empty folder from source as an empty "backup" directory to the container like:
COPY empty_dir backup
... but what I would like to do, is just to create the folder without referring to anything existing.
A script running in the container would need to access this folder later on to copy some backup-files into it.
Is there a command like MKDIR to be used in the Dockerfile?
Simple question, but I couldn't find an answer to it (easily, at least).
You could create it with:
RUN mkdir -p /path/to/my/myemptydir
The -p flag creates intermediate directories as needed.
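For context, a minimal Dockerfile sketch using this (the base image and directory name are only examples):
FROM alpine
# create the empty directory that a script can later copy backup files into
RUN mkdir -p /backup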
I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022@qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/conf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image which contains my own config file, so that when a directory is mounted, the file shows up in the local directory and can be modified.
I'm really confused by what happens when I start a container from this image, following the official description. Here are my commands:
docker run -dp 3306:3306 \
-v /usr/local/mysql/data:/var/lib/mysql \
-v /usr/local/mysql/conf:/etc/mysql/conf.d \
--name mysql mysql:<my built tag>
I'm trying to mount /usr/local/mysql/conf to /etc/mysql/conf.d in the container, which is documented as the location where custom config files are mounted.
I supposed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf,
since I already copied it into the image, as you can see in my Dockerfile.
But it turns out that the local directory stays empty, and /etc/mysql/conf.d inside the container is overwritten by the local directory as well.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK, fine: I've been told that a volume-mounted directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from the container, while the empty conf directory simply overwrites the conf.d directory in the container?
It makes no sense.
I am very confused, and I would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I use the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database; abstracted out:
#!/bin/sh
# (actually in hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/data/mysql ]; then
mysql_install_db
# (...and start a temporary database server and run the
# /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied (with one exception, at one specific point in the lifecycle, for named volumes), so if you need to copy content from a container to the host, you either need to do it manually with docker cp or have the container code do it.
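For example, to copy the config file out of the image without the bind mount getting in the way, you could create a throwaway container and use docker cp (the image tag and container name here are placeholders):
docker create --name tmp-mysql mysql:<my built tag>
docker cp tmp-mysql:/etc/mysql/conf.d/my.cnf /usr/local/mysql/conf/
docker rm tmp-mysql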
I am creating a Docker container that will run a Minecraft server. (Yes, I know these already exist.) And of course I want the world to be saved when the container is shut down.
This is my Dockerfile:
FROM anapsix/alpine-java
COPY ./ /home
CMD ["java","-jar","/home/main.jar"]
EXPOSE 25565
Then I build the image:
docker build -t minecraftdev .
Run the container:
docker run -dp 25565:25565 -v C:/Users/user/server:/home minecraftdev
And then the files in the image (server.properties, the server jar file, and EULA.txt) are wiped out.
Is there another way I don't know of to get the container to store data, without placing the files in the host server folder?
Thank you for your answers. I was able to fix it with -v C:/Users/user/server/world:/home/world, since the world files are stored in that folder, instead of replacing all the files in the folder, which I didn't know -v did.
Minecraft creates its files next to server.jar, and I don't know how to make it store them somewhere else.
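For completeness, the adjusted run command (using the same paths and image tag as above) looks like this:
docker run -dp 25565:25565 -v C:/Users/user/server/world:/home/world minecraftdev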
When creating a container using docker run, is there a way to automatically copy files from a docker volume to the host directory it is mounted on?
When running
docker run -d -v /localpath:/containerpath image
the files found in /containerpath are not copied to my /localpath directory.
Is there a way to achieve this? The image contains a directory that needs to be accessible on the host machine for local development.
What I did not know was that when a file is added to the volume from inside the container's shell, it is also created in the host directory. So after a lot of debugging and testing, I have managed to achieve what I wanted.
For clarification, in case anyone needs this in the future: to automatically create a Docker container that clones a git repo and exposes the host public_html directory as a volume with the files already copied to the host, ready for editing:
# Create Volume for the directory
VOLUME /var/www/html
COPY scripts/start.sh /start.sh
RUN chmod -v +x /start.sh
CMD ["/start.sh"]
start.sh contains the code to clone the repo if the directory is empty:
#!/bin/bash
if [ "$(ls -A /var/www/html)" ]; then
echo "Directory already cloned"
else
echo "Repo files do not exist" ;
git clone ...
fi
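With that in place, a run command along these lines (the host path and image name are only examples, not from the question) leaves the cloned files visible on the host for editing:
docker run -d -v /home/me/public_html:/var/www/html my-web-image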
Thanks for the help