Download GitLab file in Docker container

I want to download a .h5 file into my container and load it into some code. I tried using curl/wget with the GitLab API in the Dockerfile to download it, but when I check the files in the container it's not there.
curl --header 'PRIVATE-TOKEN:XXXXX' 'https://gitlab.com/api/v4/project/100/repository/files/mobilenet.h5?ref=master'
When I run the command directly in the container shell, I get {"error":"404 Not Found"}.
Any idea how I could solve this?
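One likely cause of the 404 is the endpoint path itself: the GitLab v4 API uses the plural "projects", and downloading the raw file contents requires the /raw suffix (without it, the API returns JSON with base64-encoded content). A minimal sketch, assuming project ID 100 and a valid token:
# Note "projects" (plural) and the "/raw" suffix for the raw file contents
curl --header 'PRIVATE-TOKEN: XXXXX' \
  'https://gitlab.com/api/v4/projects/100/repository/files/mobilenet.h5/raw?ref=master' \
  --output mobilenet.h5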

Related

Is it possible to run the curl command in a Bitbucket pipeline?

I have an SQL file which creates tables and their data, and I also want to use that dump file in my docker-compose setup. The best solution I could come up with was running the curl command to download the dump file from an external URL and then using it in my Docker entrypoint. I also want to automate this process: is it possible to run the curl command in the pipeline and delete the dump file after running the containers?
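A pipeline step just runs shell commands, so curl works the same way it does locally. A minimal sketch of what the script section of a bitbucket-pipelines.yml step might contain, assuming DUMP_URL is defined as a repository variable:
# Hypothetical step script: fetch the dump, bring the stack up, then clean up
curl -fSL -o dump.sql "$DUMP_URL"
docker-compose up -d
# (make sure the entrypoint has already consumed the file before deleting it)
rm dump.sql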

How to put code into a Docker image while building

I want to clone the code into the Docker image while building it.
I am thinking of passing SSH keys during the git clone, which is not working. Below is the command I am using; it fails with permission denied:
ssh-agent bash -c 'ssh-add /home/username/.ssh/id_rsa.pub; git clone ssh://git@location.git'
I can't clone using HTTPS.
Also, if the code is cloned into the image, can we git pull while running it in a container?
So there are two real paradigms here:
I am working on my local machine.
In this scenario, you more than likely already have the code checked out onto your local machine. Here, just use the COPY directive to take the entire folder and put it somewhere into the container; there's no need to involve git at all (see the sketch below).
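A minimal Dockerfile sketch of this approach; the php:apache base image and the /var/www/html target directory are illustrative assumptions:
FROM php:apache
# Copy the already-checked-out source from the build context into the image
COPY . /var/www/html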
I am having a build server perform the build
In this scenario, it makes sense to let the build server check the code out and then perform the same action as above: we just copy the checked-out code into the image.
Lastly, another alternative that works for dynamic languages like PHP, JS, etc., is to not put the code into the image at all, but to mount the code into the container at runtime.
Let's take PHP for example. If the webserver is looking in /var/www/html for the code, you can run your image like this:
docker run -d --name {containername} -p 80:80 -p 443:443 -v /my/dir/where/code/is:/var/www/html {your base image}
The above will start a container from the image and pass your local directory through to the /var/www/html directory, meaning any changes you make locally appear in the source code inside the container. This pattern was much more prominent back with Vagrant and in the early days of Docker, before Docker Compose was stable.
I think the way to do it is:
On your build machine:
git clone <repo>
git archive --format=tar.gz <commit_hash/branch> --output=code.tar.gz
docker build
In the Dockerfile, you'll have to add:
ADD code.tar.gz <directory>
This will make sure that you're not adding any .git files into your container, and it'll be as small as possible.

Load a container

Hello, I am new to Docker. I have installed my base image, which is WordPress, on my PC. Since I use multiple systems, I copied my current commits from the PC to openSUSE. Now I want to load my committed file onto openSUSE. Is there any possible way to do it? I tried doing run and I cannot see any changes.
Base image: WordPress
docker run -dti -p 192.168.56.10:80:80 -p 192.168.56.10:2222:22 -h baseWordpress --name baseWordpress --restart unless-stopped mine/wordpress /usr/bin/supervisor
docker start baseWordpress
Commit files: what should I use to run and start the committed file on openSUSE?
You could try to have your own Dockerfile, where mine/wordpress is the base image, and you overwrite files with your commit files, as sketched below.
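A minimal sketch of such a Dockerfile; the commit-files source folder and the /var/www/html target are illustrative assumptions:
FROM mine/wordpress
# Overlay your committed files on top of the base image's content
COPY ./commit-files /var/www/html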
Or you could try to map a volume (the -v option) in the docker run command and then copy your commit files to the mapped host folder. Next, attach to the container and move the files to the correct location.

Why do the changes I make in my working directory not show up in my Docker container?

I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true in your parse-dashboard-config.json file. But even if I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is uploaded to my Docker container is in fact the config file of my master branch.
It depends:
what that "docker" is: the official Docker Hub or a private Docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
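For example, a minimal sketch of that workflow; the image name myapp is an assumption:
$ git checkout master
$ docker build -t myapp .   # the Dockerfile COPYs the checked-out files
$ docker run myapp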
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$(pwd)/src/Parse-Dashboard/parse-dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json" parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now, we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v "$(pwd):/src" node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic Node image because, if we're not ADDing a directory and running the build commands, there's nothing our image would do differently than the official one.

copy a file to a volume of a running docker container using the remote API

How do you copy a file to a volume of a running docker container using the remote API?
I know about docker cp (https://docs.docker.com/engine/reference/commandline/cp/) but I would like to do this using the remote API.
I would like to do the equivalent of
docker cp path_to_local_file container:location_in_volume
Except I want to POST the file using the remote API.
I can't find anything about it in the remote API docs (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/).
Is it possible?
Use PUT archive:
PUT /containers/{container name or id}/archive?path={path in container} HTTP/1.1
Content-Type: application/x-tar
{{ TAR STREAM }}
The body of the request should be a tar file.
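For example, a minimal sketch using curl against the local Docker socket; the container name, paths, and API version are assumptions:
# Wrap the file in a tar archive, then PUT it to the archive endpoint
tar -cf file.tar path_to_local_file
curl -X PUT --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/x-tar' \
  --data-binary @file.tar \
  'http://localhost/v1.24/containers/mycontainer/archive?path=/location_in_volume'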
