I have a personal ASP.NET Core project which scrapes data from the web using Selenium and Chromium and saves it in a local SQLite database.
I want to be able to run this app in a Docker image on my Synology NAS. I managed to create and run the Docker image (on my Mac), and it displays data from the SQLite db correctly, but I get an error when trying to scrape:
The chromedriver file does not exist in the current directory or in a directory on the PATH environment variable.
From my very limited understanding of Docker in general, I understand that I need to add the chromedriver binary inside the Docker image somehow.
I've searched a lot, went through ~30 different examples, and still can't get this to work.
Any help is appreciated!
You need to build a new image based on the existing one, in which you add the chromedriver binary. In other words, you need to extend your current image.
So create a directory containing a Dockerfile and the chromedriver binary.
Your Dockerfile should look like this:
FROM your_existing_image_name:version
COPY chromedriver desired_path_inside_container
Then open a terminal inside this directory and execute:
docker build -t your_existing_image_name:version++ .
After that you should be able to start a container from the newly created image.
Some notes:
I have assumed that your existing image has been tagged with a version. If that is not the case, remove :version from the Dockerfile.
Similarly, remove :version++ from the build command. However, it is good practice to include versioning in your images.
I have not added any entrypoint, assuming that you do not need to change the existing one.
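Putting it together, a minimal sketch, assuming the image is named scraper and tagged 1.0 (both names are hypothetical) and that the chromedriver binary matches your Chromium version and the container's CPU architecture:
FROM scraper:1.0
# copy the chromedriver binary into a directory that is typically on PATH
COPY chromedriver /usr/local/bin/chromedriver
# make sure the binary is executable inside the container
RUN chmod +x /usr/local/bin/chromedriver
Build it with docker build -t scraper:1.1 . and chromedriver should then be found at runtime. Keep in mind that a chromedriver binary built for your Mac won't run if the Synology NAS uses a different CPU architecture.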
Related
I am new to Docker, and I have pulled a Docker image for the SparQ spatial reasoning toolbox using docker pull dwolter/sparq:latest (GitHub: https://github.com/dwolter/SparQ).
SparQ contains a set of calculi in the form of Lisp files, which can be used to do spatial reasoning; I use the SparQ Docker image under Docker on Windows.
The thing is that I have developed my own calculus and I need to add it to the image.
I have tried to do that using the cp command, but I could not, because I don't know the path of the file inside the image; in other words, where I should place the file inside the image. Also, when I placed it in the root of a container and applied the commit command, it generated the error: access denied by the resource.
My first question is:
Does the path in the image match the path in the SparQ application folder which I have already downloaded?
Also, how can I add this calculus (Lisp file) to the image in Docker?
P.S. I have also downloaded the folder which contains the application (SparQ and all its files and folders) and placed my calculus inside the appropriate folder (the Calculi folder), where it works fine.
I run it from the Linux command line and it works fine. Now I need to use this application through Docker, since I have the application in a folder.
Can I create an image of my own based on the folder that contains the application?
The SparQ Dockerfile indicates the working directory is set to /root/sparq. That means you should be able to use the following COPY command in your own Dockerfile to place your Lisp file in the same place you have it locally, where all the other calculi Lisp files are located:
FROM dwolter/sparq
COPY ./path/to/my/Calculi/file.lisp ./Calculi
Then run docker build . to build a Docker image containing SparQ and your file. It should then be ready to run.
NOTE: I am not familiar with lisp. If it needs to be compiled, the compile command will need to be added to the Dockerfile after the COPY.
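For example, the build and run could look like this (the tag my-sparq is just an assumption; the run relies on the entrypoint already defined in the base image):
docker build -t my-sparq .
docker run --rm -it my-sparq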
I have a BDD framework in Java which I am planning to dockerize. I am able to build and run that image as a whole. But what I want is:
To build 2 images, Image-1: Entire project (without feature files) & Image-2: Feature files.
Reason to do this is: my feature files change often. I don't want to rebuild my image, reinstalling the JDK and Maven, every time there is only a change in a feature file.
What I expect is: Image-1 always runs as a container in the background, and when there is a change in the feature files, I build Image-2 and start it as a container. This should trigger the tests using the already-running container, which has all the dependencies.
If you just want to meet the above requirement, what you need is image inheritance, like this:
base/Dockerfile:
FROM ubuntu:16.04
# install JDK and Maven here, for example (exact package names are an assumption):
RUN apt-get update && apt-get install -y openjdk-8-jdk maven
Build a base image now:
$ docker build -t mybase:1 .
Then, for your application, use this base image:
app/Dockerfile:
FROM mybase:1
# add new feature files here
ADD ... ...
Every time your feature files change, you can rebuild your app Dockerfile and run a container based on the newly built image. But, as the JDK and Maven are in another base image (mybase:1) which was already built, they won't be built again.
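Concretely, the app Dockerfile and the rebuild could look like this (the feature-file path and image tag are assumptions):
FROM mybase:1
# only the feature files, which change often, go into this image
ADD src/test/resources/features/ /app/features/
Then rebuild and run:
$ docker build -t myapp:1 app/
$ docker run --rm myapp:1
Only the ADD layer is rebuilt on each change; the layers from mybase:1 are reused as-is.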
Visual Studio Code (1.22.2) offers a file extension named .dockerfile in the save dialog. What is a file with this extension? A Dockerfile is, in all documentation and examples that I've seen so far, only called Dockerfile.
If I enter Dockerfile as a file name, a file named Dockerfile.dockerfile is created.
It appears that "*.dockerfile" is simply an alternative to the conventional "Dockerfile" name. This is perhaps useful if you want to keep a collection of dockerfiles in the same directory. Note the -f/--file option in docker help build:
-f, --file string Name of the Dockerfile (Default is 'PATH/Dockerfile')
In other words, you are not required to use the name "Dockerfile", and the VSCode extension will correctly syntax-highlight any file ending in ".dockerfile".
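For example, to build from a file named api.dockerfile (a hypothetical name) instead of the default Dockerfile:
docker build -f api.dockerfile -t api .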
Dockerfile
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. Docker images are the basis of containers. An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes.
Dockerfile extension
A Dockerfile has no extension. If you are using Docker on Windows, use Notepad++ to create the Dockerfile; when saving, select "All types" and save the file name as "Dockerfile".
For example: Mongodb/Dockerfile
Using the .dockerfile extension tells VSCode that the file is a Dockerfile, for code highlighting and linting.
What worked for me was to save the file in VS Code as a Dockerfile. But you need to remove the .dockerfile extension that VS Code puts on it before running the docker-compose up command.
Even though VSCode can deal with extensionless files just fine, some major parts of the Windows operating system can't. Try double-clicking a Dockerfile (without an extension) in Windows Explorer: you will always be asked which program you want to open it with, because Windows can't map extensionless files to a default program.
My guess is that because of this problem, Microsoft would like for all files to have an extension and uses VSCode to nudge people towards using a file extension for Dockerfiles, ignoring the fact that this contradicts the de facto standard.
A Dockerfile doesn't have an extension.
As you can see from the documentation, https://docs.docker.com/compose/gettingstarted/, it doesn't have one.
I'm creating some Windows Container images that I need, but the source files I want to ADD are in a network share, \\myserver\myshare\here.
I've tried every way I can think of, but I always get the error The system cannot find the path specified.
Is it because I have not yet found the right way to set it or is it that it is just not possible?
From the Docker site:
Multiple resources may be specified but if they are files or directories then they must be relative to the source directory that is being built (the context of the build).
Is that why I can't accomplish what I need?
Full error message: GetFileAttributesEx \\myserver\myshare\here\: The system cannot find the path specified.
Whatever you ADD or COPY must be in the docker build context.
When you do this:
docker build .
That directory parameter (the . in the example) is the build context that is copied and sent to the Docker daemon. The daemon then uses those files for COPY or ADD; it won't use any file that is not in that context.
That is the issue you are experiencing. I'm not sure how you can solve it other than by copying the files from \\myserver into your build directory, as sketched below.
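For instance, a small staging step on the Windows host could copy the share into the build context before building (the share path is from the question; the context folder name is an assumption):
xcopy \\myserver\myshare\here .\context\here /E /I
docker build .\context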
ADD is also capable of downloading files when given a URL (you should investigate whether it supports Windows shares).
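For example, if the files were reachable over HTTP rather than over the SMB share (the URL here is hypothetical), something like this would work:
ADD http://myserver/files/payload.zip /files/payload.zip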
I'm building a Yocto image for a project but it's a long process. On my powerful dev machine it takes around 3 hours and can consume up to 100 GB of space.
The thing is that the final image is not necessarily the end goal; it's my application that runs on top of it that is important. As such, the Yocto recipes don't change much, but my application does.
I would like to run continuous integration (CI) for my app and even continuous delivery (CD). But both are quite hard for now because of the size of the yocto build.
Since the build does not change much, I thought of "caching" it in some way and using it for my application's CI/CD, and I thought of Docker. That would be quite interesting, as I could maintain that image, share it with colleagues who need to work on the project, and use it in CI/CD.
Could a custom Docker image be built for that kind of use?
Would it be possible to build such an image completely offline? I don't want to have to upload the 100GB and have to re-download it on build machines...
Thanks!
1. Yes.
I've used docker to build Yocto images for many different reasons, always with positive results.
2. Yes, with some work.
You want to take advantage of the fact that Yocto caches all the stuff you need to do your build in what it calls "Shared State Cache". This is normally located in your build directory under ${BUILDDIR}/sstate-cache, and it contains exactly what you are looking for in this case. There are a couple of options for how to get these files to your build machines.
Option 1 is using sstate mirrors:
This isn't completely offline, but lets you download a much smaller cache and build from that cache, rather than from source.
Here's what's in my local.conf file:
SSTATE_MIRRORS ?= "\
file://.* http://my.shared-computer.com/some-folder/PATH"
Don't forget the PATH at the end. That is required. The build system substitutes the correct path within the directory structure.
Option 2 lets you keep a local copy of your sstate-cache and build from that locally.
In your Dockerfile, create the sstate-cache directory (the location isn't important here; I like /opt for my purposes):
RUN mkdir -p /opt/yocto/sstate-cache
Then be sure to bind-mount this directory when you run your build in order to preserve the contents, like this:
docker run ... -v /place/to/save/cache:/opt/yocto/sstate-cache
Edit the local.conf in your build directory so that it points at this folder:
SSTATE_DIR ?= "/opt/yocto/sstate-cache"
In this way, you can get your cache onto your build machines in whatever way is best for you (scp, nfs, sneakernet).
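For example, the saved cache directory could be pushed to a build machine before a build like this (the host name and paths are assumptions):
rsync -a /place/to/save/cache/ builder@build-machine:/place/to/save/cache/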
Hope this helps!