If I want to get files in ECS's /tmp path, is it necessary to set a volume item to map the path?
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#volume
Or is there a way to run something like docker exec ... to see inside the container?
I am not sure I understand the question fully. You can "ecs exec" into a task (if that's what you want/need to do). Here is the doc page on how to do that and this is a longer blog post that dives into it.
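For reference, ECS Exec goes through the AWS CLI; a minimal sketch (cluster, task, and container names are placeholders) looks like this:
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container my-container \
  --interactive \
  --command "/bin/sh"
Note that the task must be started with execute-command enabled (and the task role needs the SSM permissions) for this to work.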
If you instead need to pre-populate files in /tmp you have a couple of options: either pull them at container startup as part of a startup script, or mount the /tmp directory to an external share that hosts the data. Here is how.
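For the first option, a minimal entrypoint sketch (the URL, file name, and application command are placeholders):
#!/bin/sh
# Pull the needed file into /tmp before handing off to the real application
wget -O /tmp/conf.json https://example.com/conf.json
exec /usr/local/bin/my-app
For the second option, the volume block of the aws_ecs_task_definition resource you linked supports an efs_volume_configuration, which you then reference from a mountPoints entry in the container definition.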
Related
So my question is whether it is possible to have a volume like:
"${my_conf_file:-raw.my/GitHub/file.git}":/conf.json
This would be my goal; however, I cannot find anything related to it. In the end, if the user has a file, that file should be passed; otherwise, either conf.json should not be replaced by anything (because the GitHub file is already there, to be replaced by a conf file the user might have) or the file from GitHub should be passed again.
It is best to figure out the first part ("${my_conf_file:-raw.my/GitHub/file.git}") ahead of the docker run.
In your start script (which calls docker run or uses your docker-compose.yml), add logic to determine which config file you want (the user's, conf.json itself, or the one from GitHub).
Once you can script that, you can add your docker run -v call, which will mount the right file to /conf.json in the container.
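A rough sketch of such a start script (variable names, the image name, and the GitHub URL are only examples):
#!/bin/sh
# Prefer the user's file; otherwise fall back to the copy from GitHub
CONF="${my_conf_file:-conf.json}"
if [ ! -f "$CONF" ]; then
  wget -O conf.json https://raw.githubusercontent.com/me/repo/main/conf.json
  CONF=conf.json
fi
docker run -v "$(pwd)/$CONF:/conf.json" my-image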
I'm trying to set up a Singularity container for an image processing application, and I need it to be able to save images to a specified directory. I had originally tried using a straight -B flag, but that seems to mount a directory as read-only if the container wasn't being run as root. Is there a way to either make a bind r/w for any user, or would I need to use some sort of scratch directory or fusemount?
The write permissions for the bound directory match those on the host system. If you want anyone to be able to write to a given directory, set permissions on the host with chmod 777 dir_name. Keep in mind this will allow anyone to read, write and delete files in the directory. Consider adding users to a shared group and using group permissions (chmod g+rwX dir_name) if there are people using the server who should not have access.
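A minimal sketch of the group-based approach (group name, user, and paths are placeholders):
sudo groupadd imagewriters
sudo usermod -aG imagewriters alice
sudo chgrp imagewriters /data/output
sudo chmod g+rwX /data/output
# after logging in again so the new group membership applies:
singularity exec -B /data/output:/output my_image.sif touch /output/test.txt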
If the directory has the right permissions but you still can't write to it when it's bound, you may want to use singularity --debug exec ... to see that everything is being correctly bound to the container.
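For example (image and paths are placeholders):
singularity --debug exec -B /data/output:/output my_image.sif ls -ld /output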
First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know that it'd be way more beneficial to learn the basics first, but I do require this to work in order to move on to other things for now.
So within a Dockerfile I installed wget and used it to download a file from a website; authentication and download are successful. However, when I later try to move said file it can't be found, and it doesn't show up in e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN and how it executes the wget command; I read that the container ID can be used to copy files to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere http://something.com/file.txt --username xyz --password bluh
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown and the log-in is successful, but as I mentioned I am having trouble using that file later on, since it can't be located on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to hard drive" one, I am searching for ways to get the ID of the container created within the Dockerfile in order to move it; while the thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
Short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container and mount a path from your host as a volume into the container. This is the only way you can share files between the host and a container.
Here is an example how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: for basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or similar, where you want to use wget or curl but can't execute it directly.
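For example, with the community-maintained curlimages/curl image (the URL and paths are placeholders; that image runs as a non-root user, so you may need to adjust permissions on the mounted directory):
docker run --rm -v "$(pwd)":/out curlimages/curl -o /out/file.txt https://example.com/file.txt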
Use wget -O instead of -P to download a specific file.
For example:
RUN wget -O /tmp/new_file.txt --user xyz --password bluh http://something.com/new_file.txt
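In a full Dockerfile this could look roughly like the following sketch (the base image and target directory are assumptions; the credentials are the placeholders from the question):
FROM alpine:3.19
RUN apk add --no-cache wget
# Download straight to the location inside the image where the file is needed
RUN mkdir -p /mainDirectory && \
    wget -O /mainDirectory/file.txt --user xyz --password bluh http://something.com/file.txt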
Thanks
I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that affects it in some way. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using relies on a volume to mount your current directory, so the file conf.opvn is made available inside the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before beginning the process and check again.
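For example (the file name is taken from your command):
chmod 777 conf.opvn
docker-compose run vpn conf.opvn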
You can find a discussion about this in the official Docker forum.
I am trying to use dokku-persistent-storage so the uploads for my Rails app stay on the server, but I don't quite understand how to build the path since I am new to Dokku and Docker.
(I am running this on an Ubuntu droplet on Digital Ocean)
I'm not sure if it should be something like this:
[SERVER IP ADDRESS]/home/dokku/myapp/public_folder
or
/home/dokku/myapp/public_folder
or if i'm way off and it should be something completely different.
This is what the GitHub page says about it:
In your applications folder (/home/dokku/app_name) create a file called PERSISTENT_STORAGE.
Inside this file list one volume-map/volume per line to mount. For example:
/host/path:/container/path
/another/container/path
The above example will result in the following arguments being passed to docker during deploy and docker run:
-v /host/path:/container/path -v /another/container/path
More information on docker volumes can be found here: http://docs.docker.io/en/latest/use/working_with_volumes/
I am not into Ruby or Dokku, but if I understood correctly, you want your Docker container to have persistent storage on the host machine.
The PERSISTENT_STORAGE file, according to the documentation you've quoted, contains mappings from host filesystem directories to your container's filesystem directories (translated into -v arguments for the CLI).
Therefore, you should map the uploads directory inside the container to the desired directory on the host.
For example, if your app's uploads are saved to this dir (inside the docker container):
/home/dokku/myapp/public_folder
and you'd like them to be kept in your host at:
/home/some/dir
then, as I understand it, the content of the PERSISTENT_STORAGE file should be:
/home/some/dir:/home/dokku/myapp/public_folder
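A quick way to create that file on the host (using the example paths from above):
echo "/home/some/dir:/home/dokku/myapp/public_folder" > /home/dokku/myapp/PERSISTENT_STORAGE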
I hope I got you right.
Use Dokku's storage:mount option.
You'll need to SSH into your dokku host:
ssh dokku@host
Run the following command to link the storage directory for that app to the app/public/uploads folder, for example:
storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
The Dokku docs cover this well at http://dokku.viewdocs.io/dokku/advanced-usage/persistent-storage/
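A fuller sketch, assuming the app is called myapp and you run the commands on the host itself (the restart is needed for the mount to take effect):
mkdir -p /var/lib/dokku/data/storage
dokku storage:mount myapp /var/lib/dokku/data/storage:/app/public/uploads
dokku ps:restart myapp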