In Hyperledger Fabric, in which directory are the genesis block and the ledger stored, and how can I move them from Docker to my own hard disk?
You can check the environment variables of a running orderer container, e.g.:
docker exec -it <orderer container name> sh
env
There you will see the ORDERER_GENERAL_GENESISFILE environment variable pointing to the genesis file inside that orderer container.
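A quicker way to check just that variable (assuming the container is named orderer1.example.com, as in the copy example below):
docker exec orderer1.example.com env | grep ORDERER_GENERAL_GENESISFILE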
Now you can copy that file out of the container with the docker cp command, e.g.:
docker cp orderer1.example.com:/var/hyperledger/orderer/orderer.genesis.block ./
This answer is based on Hyperledger Fabric version: 1.4.9
I'm trying to add an init.sh script to the docker-entrypoint-initdb.d so I can finish provisioning my DB in a docker container. The script is in a scripts directory in my local directory where the Dockerfile lives. The Dockerfile is simply:
FROM glats/alpine-lamp
ENV MYSQL_ROOT_PASSWORD=password
The build command completes with no errors, and the container also runs fine when I start it with the volume holding the init script:
docker run -d --name mydocker -p 8080:80 -it mydocker \
-v ~/Docker/scripts:/docker-entrypoint-initdb.d
However when I log into the running container, I don't see any docker-entrypoint-initdb.d directory, and obviously the init.sh never runs:
/ # ls /
bin etc media proc sbin tmp
dev home mnt root srv usr
entry.sh lib opt run sys var
Does anyone know why the volume isn't getting mounted?
There is no such logic defined in the Docker image you are using: the entrypoint of that image just starts MySQL and httpd, and it has no ability to initialize a database from an entrypoint directory.
If you want init scripts to run via the entrypoint and construct the database, you need to use the official MySQL image:
Initializing a fresh instance
When a container is started for the first time, a new database with
the specified name will be created and initialized with the provided
configuration variables. Furthermore, it will execute files with
extensions .sh, .sql and .sql.gz that are found in
/docker-entrypoint-initdb.d.
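For example, a minimal sketch with the official image (the mysql:5.7 tag and the password are assumptions; substitute your own values):
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=password \
  -v ~/Docker/scripts:/docker-entrypoint-initdb.d \
  mysql:5.7
Any init.sh or .sql file in the mounted directory then runs the first time the container starts with an empty data directory.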
Also, when running containers it is better to stick to one process per container; you could use a docker-compose file that runs the stack following that rule, with MySQL and Apache in separate containers.
I am running Airflow on Docker using the puckel/docker-airflow image:
docker run -d -p 8080:8080 puckel/docker-airflow webserver
How do I make pySpark available?
My goal is to be able to use Spark within my DAG tasks.
Any tip?
Create a requirements.txt, add all the dependencies to that file, and then follow https://github.com/puckel/docker-airflow#install-custom-python-package:
- Create a file "requirements.txt" with the desired python modules
- Mount this file as a volume -v $(pwd)/requirements.txt:/requirements.txt (or add it as a volume in docker-compose file)
- The entrypoint.sh script executes the pip install command (with the --user option)
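A minimal sketch, assuming pyspark is the only package you need (add whatever your DAGs import):
echo "pyspark" > requirements.txt
docker run -d -p 8080:8080 \
  -v $(pwd)/requirements.txt:/requirements.txt \
  puckel/docker-airflow webserver
Note that pyspark additionally needs a Java runtime inside the image, so depending on your setup you may have to extend the image rather than rely on pip alone.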
I want to encrypt my Kubernetes file to integrate it with Travis CI, and for that I am installing the Travis CI CLI via a Docker container. When the container runs and I mount my current working directory to /app, it just creates an empty folder.
I have also added the folder to the shared folders in VirtualBox, but nothing seems to work. I am using Docker Toolbox on Windows 10 Home.
docker run -it -v ${pwd}:/app ruby:2.3 sh
It creates the empty app folder along with the other folders in the container but does not mount the volumes.
I also tried using
docker run -it -v //c/complex:/app ruby:2.3 sh
as someone suggested, using the share name specified in VirtualBox.
Use the full path of the current directory:
docker run -it -v <full path of current directory>:/app ruby:2.3 sh
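Note that with Docker Toolbox only paths under C:\Users are shared with the VirtualBox VM by default, and inside the VM they appear as /c/Users/..., so a working call looks something like this (the user and folder names here are hypothetical):
docker run -it -v /c/Users/you/complex:/app ruby:2.3 sh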
I have to start each instance of my application as a separate Docker service. The base image is the same but the configuration file is different for each instance. Now, the problem is my application makes some changes to the configuration file. And I want the configuration changes to persist so that when my application restarts (as docker service), it uses the updated configuration.
I am able to use the config file as a mount point using docker config. But the problem is that no matter what mode (rwx) I give, I am not able to update the config file from inside the container; every write fails with "Read-only file system".
1. How do I make the changes to the config file from docker container?
2. How do I make the updated config file persist outside the container, so that on service restart, the updated configuration is used?
I did the following to decouple config file from the image/container:
docker config create my-config config.txt
docker service create \
--name redis \
--config src=my-config,target=/config.txt,mode=0660 \
redis:alpine
docker container exec -ti <containerId> /bin/sh
The config file is mounted at /config.txt but I am not able to edit it.
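Inside the container, any attempt to write fails with something like:
/ # echo test >> /config.txt
sh: can't create /config.txt: Read-only file system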
The config will be read-only by design, but you can copy it to another file inside your container as part of an entrypoint script defined in your image.
docker config create my-config config.txt
docker service create \
--name redis \
--config src=my-config,target=/config.orig,mode=0660 \
username/redis:custom
The entrypoint script would include the following:
#!/bin/sh
if [ ! -f /config.txt -a -f /config.orig ]; then
  cp /config.orig /config.txt
fi
# skipping the typical exec command here since redis has its own entrypoint
# exec "$@" # run the CMD as pid 1
exec docker-entrypoint.sh "$@"
Your Dockerfile to build that image would look like:
FROM redis:alpine
COPY /entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
And you'd build that with:
docker build -t username/redis:custom .
Docker swarm configs are read-only by design, not only from inside the container but also from the outside. To update the config of your service you must create a new config, as explained in the docker swarm config docs.
How do I update the config of my service?
You need to copy the config, edit it, save it with a new name, and then update the service:
# Get the config from docker to file
docker config inspect --pretty my-config | tail -n +6 > conf-file
# Edit conf-file as needed here
...
# Save it with new name
docker config create my-config-v2 conf-file
# Update the service
docker service update \
--config-rm my-config \
--config-add source=my-config-v2,target=/config.txt \
redis
How do I update the config from inside the container?
For this you'll need access to docker from inside the container. You can do so by mounting the docker executable and the docker socket into the container:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
ubuntu bash
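From inside such a container you can then run the same rotation shown above; a sketch, assuming the edited file is available at /config.txt and the config and service names match the earlier example:
# inside the container: save the edited file as a new config
docker config create my-config-v2 /config.txt
# point the service at the new config
docker service update \
  --config-rm my-config \
  --config-add source=my-config-v2,target=/config.txt \
  redis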
I can't copy the file from the host into the container using the Dockerfile, because I'm simply not allowed to, as mentioned in the Docker documentation:
The path must be inside the context of the build; you cannot
COPY ../something /something, because the first step of a docker build
is to send the context directory (and subdirectories) to the docker
daemon.
I'm also unable to do so from inside the Jenkins job, because the job commands run inside the shell of the Docker container; there is no way to talk to the parent (the Jenkins host).
This Jenkins plugin could have been a lifesaver, but as mentioned in its first section, distribution of this plugin has been suspended due to unresolved security vulnerabilities.
This is how I copy files from the host into a Docker image using a Dockerfile.
I have a folder called tomcat
Inside that, I have a tar file and Dockerfile
Commands for the whole process, just for understanding:
$ pwd
/home/user/Documents/dockerfiles/tomcat/
$ ls
apache-tomcat-7.0.84.tar.gz Dockerfile
Sample Dockerfile:
FROM ubuntu_docker
COPY apache-tomcat-7.0.84.tar.gz /home/test/
...
Docker commands:
$ docker build -t testserver .
$ docker run -itd --name test1 testserver
$ docker exec -it test1 bash
Now you are inside the Docker container:
# ls /home/test
apache-tomcat-7.0.84.tar.gz
As you can see I am able to copy apache-tomcat-7.0.84.tar.gz from host to Docker container.
Notice the first line of the Docker documentation you shared:
The path must be inside the context of the build;
So as long as the path is reachable during build you can copy.
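If the file lives outside the directory that contains the Dockerfile, you can widen the context instead: run the build from a parent directory and point -f at the Dockerfile. A sketch using the paths from the example above:
$ cd /home/user/Documents/dockerfiles
$ docker build -t testserver -f tomcat/Dockerfile .
The COPY source is then relative to the wider context, i.e. COPY tomcat/apache-tomcat-7.0.84.tar.gz /home/test/.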
Another way of doing this would be to use a volume:
docker run -itd -v $(pwd)/somefolder:/home/test --name test1 testserver
Notice the -v parameter: you are telling Docker to mount Current_Directory/somefolder at Docker's path /home/test. Once the container is up and running, you can simply copy any file to $(pwd)/somefolder on the host and it will appear inside the container at /home/test.
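For example, with the container from above running (the file name is reused from the earlier example):
$ cp apache-tomcat-7.0.84.tar.gz $(pwd)/somefolder/
$ docker exec -it test1 ls /home/test
apache-tomcat-7.0.84.tar.gz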