I have an SQL file that creates tables and their data, and I want to use that dump file with my docker-compose file. The best solution I could come up with was running a curl command to download the dump file from an external URL and then using it in my Docker entrypoint. I also want to automate this process: is it possible to run the curl command in the pipeline and delete the dump file after the containers are running?
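A rough sketch of that workflow (the dump URL, file name, and the mysql image are placeholders for whatever your setup actually uses): mount the downloaded file into the database image's init directory in docker-compose.yml,

services:
  db:
    image: mysql:8.0
    volumes:
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql

and then let the pipeline fetch it before bringing the stack up and remove it afterwards:

# hypothetical CI step
curl -fsSL -o dump.sql https://example.com/dump.sql
docker-compose up -d
rm dump.sql

The init scripts are only read during the database's first initialization, so removing the local copy afterwards is usually safe, but the containers would need the file again if you ever re-initialize an empty data directory.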
I'm using a local installation of TinkerPop with the Docker image tinkerpop/gremlin-server:3.4.1 in order to interact locally with the graph database from Node.js.
I need to set the IDManager to ANY so that it accepts string values for custom vertex IDs (right now it only works with numeric types).
I know I need to set the TinkerGraph configuration property gremlin.tinkergraph.vertexIdManager, but I'm not sure how to have it initialized with the correct configuration in my docker-compose file.
http://tinkerpop.apache.org/docs/current/reference/#_configuration_4
Anyone know how to do this?
Thanks
When you launch the container using a command such as
docker run --rm -p 8182:8182 tinkerpop/gremlin-server
you can optionally pass in the path to a YAML configuration file, which would look like this:
docker run --rm -p 8182:8182 tinkerpop/gremlin-server conf/gremlin-server.yaml
That file is located inside the container in the /opt/gremlin-server/conf folder. One option is to docker exec into the running container, edit the YAML and properties files, and then create a new image from the modified container. You could also use docker cp to replace those files. While this will work, the downside is that you will have to do it each time you pull a newer version of the Gremlin Server image.
What you can try instead is to mount a local file system volume as part of the docker command, containing a YAML file that points to your own properties file in which you can add the ID manager lines:
gremlin.tinkergraph.vertexIdManager=ANY
gremlin.tinkergraph.edgeIdManager=ANY
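For example, a minimal properties file and the YAML fragment that points to it might look like this (the names myfile.properties and myfile.yaml are just placeholders):

# conf/myfile.properties
gremlin.graph=org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph
gremlin.tinkergraph.vertexIdManager=ANY
gremlin.tinkergraph.edgeIdManager=ANY

# conf/myfile.yaml (only the relevant part)
graphs: {
  graph: conf/myfile.properties
}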
The docker command will then look something like this:
docker run --rm -p 8182:8182 -v $(pwd):/opt/gremlin-server/conf tinkerpop/gremlin-server conf/myfile.yaml
However, this may not work, as the Gremlin Server startup script runs a sed command that creates a modified version of the YAML file, and that requires write permissions to your local disk (this can be worked around as explained below). As a side note, the sed is done to fix up issues with IP addresses. The file and user permissions need to be such that the sed command is able to run.
To work around Docker now needing the ability to edit files on your local disk (rather than in the container's own ephemeral storage), at least on Linux systems, you can try using the --user parameter as shown below.
docker run --rm -p 8182:8182 --user $(id -u):$(id -g) -v $(pwd):/opt/gremlin-server/conf tinkerpop/gremlin-server conf/myfile.yaml
Note that for this to work, any files that Gremlin Server expects to read from the conf folder as part of its bootstrap process will now need to exist on your local disk, as we have re-mapped where the conf folder is. The files read during startup include the log4j-server.properties file and any scripts and properties files referenced by your YAML file. You can copy these files from the container itself (via docker exec or docker cp) or from the Apache TinkerPop GitHub repo.
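Since the original question was about docker-compose, the same idea can be sketched roughly like this (the service name, the local ./conf folder, and the UID/GID variables are assumptions you would adapt to your setup):

services:
  gremlin-server:
    image: tinkerpop/gremlin-server:3.4.1
    ports:
      - "8182:8182"
    user: "${UID}:${GID}"    # export these in your shell first
    volumes:
      - ./conf:/opt/gremlin-server/conf
    command: conf/myfile.yaml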
My question is similar to Execute Command on host during docker build, but I need my container to be running when I execute the command.
Background - I'm trying to create a base image for the database part of an application, using the mysql:8.0 image. The installation instructions for the product require me to run a DDL script to create the database (done by copying the .sql file to the entrypoint directory), but the second step involves running a Java-based application that reads various config files to insert the required data into the running database. I would like this second step to be captured in the Dockerfile somehow so I can then build a new base image containing the tables and the initial data.
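For reference, that first step is just the standard mysql image convention (schema.sql is a placeholder name for the product's DDL script):

FROM mysql:8.0
COPY schema.sql /docker-entrypoint-initdb.d/

Any .sql file in that directory is executed the first time the container initializes an empty data directory.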
Things I've thought of:
- Install Java and copy the quite large config tool into the container and exec the appropriate command, but I want to avoid installing Java into the database container, and certainly into the subsequent image, if I can.
- I could run the config tool on the host manually and connect to the running container, but my understanding is that this would only apply to the running container; I couldn't get this into a new image. It needs to be done from the Dockerfile for docker build to work.
I suspect docker just isn't designed for this.
I am new to Docker and containers. I have a container with MRI analysis software. Within this container are many other programs that the main software draws its commands from. I would like to run a single command from one of the programs in this container, using research data located on an external hard drive that is plugged into the local machine running Docker.
I know there is a cp command for copying files (such as scripts) into containers, and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data in separate folders of a directory structure, and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute some shell script and provide it with context (like local files).
The approach is straightforward.
Let's say your script and all the needed files are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the nearby files, thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello into the container's folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've just made them different here to show the mapping.
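Applied to the question above: since volumes can only be attached when a container is started (not to one that is already running), a sketch might be to start a new container with the external drive mounted at the same path it has on the host (the image name your-mri-image is a placeholder):

docker run -it -v /Volumes/DISS:/Volumes/DISS your-mri-image /bin/bash

Inside that shell, the mcflirt command from the question should then find the /Volumes/DISS/... paths unchanged.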
I'm trying to run specific commands that would be automatically fired on docker-compose up.
I want to avoid all those steps: https://github.com/FLKone/Dodee/tree/php_mysql_slim
(downloading a zip containing the docker-compose.yml + some required default file)
In that example I need a default config file for Nginx.
So right now the solution is to download the zip containing both the yml and the config file. But it would be better if the config file was downloaded when the user runs docker-compose up (or created by it, to limit network access).
(Maybe the best practice here is to create an installation script to download both the yml and the config file?)
Thanks
I'm trying to run specific commands that would be automatically fired on docker-compose up
Use entrypoint in your docker-compose.yml. You can do this per service, so the web container can download/configure nginx conf, the php container can run composer, etc.
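A rough docker-compose.yml sketch of that idea (the image, the config URL, and the paths are all placeholders):

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    entrypoint: ["/bin/sh", "-c", "wget -O /etc/nginx/conf.d/default.conf https://example.com/default.conf && exec nginx -g 'daemon off;'"]

The entrypoint overrides whatever the image would normally run, so it has to end by starting the service itself (here nginx in the foreground).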
I'm not sure that I'm trying to do it the right way, but I would like to use docker.io as a way to package some programs that need to be run from the host.
However, these applications take filenames as arguments and need at least read access to them. Other applications generate files as output, and the user expects to retrieve those files.
What is the docker way of dealing with files as program parameters?
Start Docker with a mounted volume and use this directory to manipulate files.
See: https://docs.docker.com/engine/tutorials/dockervolumes/
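A minimal sketch (the image, tool, and file names are placeholders): mount a host directory and pass paths inside that mount as the program's arguments.

docker run --rm -v /path/on/host:/data my-tool-image mytool /data/input.txt /data/output.txt

Anything the tool writes to /data/output.txt inside the container shows up as /path/on/host/output.txt on the host.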
If you have apps that require args when they're run, then you can just inject your parameters as environment variables when you run your Docker container
e.g.
docker run -e ENV_TO_INJECT=my_value .....
Then in your ENTRYPOINT (or CMD) make sure you just run a shell script
e.g. (in Dockerfile)
CMD ["/my/path/to/run.sh"]
Then in your run.sh file that gets run at container launch you can just access the environment variables
e.g.
./runmything.sh $ENV_TO_INJECT
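Putting those pieces together, a minimal run.sh (runmything.sh and the variable name are just the placeholders from above) could be:

#!/bin/sh
# forward the injected environment variable to the actual program
exec ./runmything.sh "$ENV_TO_INJECT"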
Would that work for you?