We built the Docker image locally, ran it, and it works perfectly.
When the same image is built through Azure Container Registry, we get a file/module-not-found error. So either the files are not copied properly, or the file paths differ from the import paths in the code.
When we inspected both images, the image sizes differ; in particular, the layer created by the COPY . . command has a different size.
I used VS Code to build the Docker image in Azure Container Registry.
The first screenshot shows the Docker image built on the local system; the second shows the Docker image pulled from Azure Container Registry.
Any help to resolve this issue would be great.
Local system: Mac, Node version 16
Azure Container Registry: Linux, Node 16
Error: Cannot find module "./src/routes/public"
Require stack:
/app/app.js
/app/server.js
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:985:15)
at Function.Module._load (node:internal/modules/cjs/loader:833:27)
at Module.require (node:internal/modules/cjs/loader:1057:19)
at require (node:internal/modules/cjs/helpers:103:18)
at Object.<anonymous> (/app/app.js:7:1)
at Module._compile (node:internal/modules/cjs/loader:1155:14)
at Module._compile (/app/node_modules/pirates/lib/index.js:99:24)
at Module._extensions..js (node:internal/modules/cjs/loader:1209:10)
at Object.newLoader [as .js] (/app/node_modules/pirates/lib/index.js:104:7)
at Module.load (node:internal/modules/cjs/loader:1033:32) {
code: 'MODULE_NOT_FOUND',
requireStack: [ "/app/app.js", "/app/server.js" ]
}
[Screenshot: local Docker image]
[Screenshot: Docker image built by Azure Container Registry]
Tried to find the root cause for this and compared both images. They have different sizes, and the COPY command might not be copying the files as expected.
I tried different approaches, but I always got the same issue.
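A way to check what the COPY . . step actually put into each image is to list the directory in question in both of them; a sketch, where my-local-image and <acr-name>.azurecr.io/my-image:latest are placeholders for your two tags:

docker run --rm --entrypoint ls my-local-image -la /app/src/routes
docker run --rm --entrypoint ls <acr-name>.azurecr.io/my-image:latest -la /app/src/routes

If the two listings differ, the likely suspects are files excluded from the context uploaded to ACR (.dockerignore rules) or a file-name casing mismatch: macOS file systems are case-insensitive by default while Linux is case-sensitive, so a case-only rename that works locally can end up with the old casing in the remote build.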
Related
I've been facing an issue with which I need some help.
We're working on a project that is dockerized and runs in a docker-compose stack. Our code is mounted into the containers to allow live reload of changes. But the project uses a proprietary library that may also require changes. The goal is to have this library mounted in the container as well, since currently the only way is to install it with poetry add ../ArcheologCommon and run everything locally (without Docker). This is a real pain, since we have many configurations and 7+ services to run.
The first solution I found was to mount my local venv into the container, but that doesn't work, for multiple reasons:
Mounting the entire venv brings along hardcoded absolute paths in the venv/bin/activate script that are usable only by the host.
So the second attempt was to mount only the venv/lib folder, to avoid the hardcoded-path issue.
That works very well until I run poetry add ../ArcheologCommon, which gives me a ModuleNotFoundError: No module named 'archeolog_common' exception in my container.
Looking at what Poetry did in my venv, I found that my library became an *.egg-link file containing the absolute path of the library on my laptop:
~/ ➤ docker exec -ti backend bash
www-data@467190e2f634:~/backend$ cat .venv/lib/python3.8/site-packages/archeolog-common.egg-link
/home/path/to/ArcheologCommon
The only workaround I found is to mount my local copy of the code directly into the site-packages of the venv, in place of the code already present there:
services:
  backend:
    command: server.py --bind 0.0.0.0 --port 9000 --reload-dir /var/www/backend --reload-dir /opt/venv/lib/python3.8/site-packages/archeolog_common
    # [...]
    volumes:
      - ...
      - ../../ArcheologCommon/archeolog_common:/opt/venv/lib/python3.8/site-packages/archeolog_common:ro
This results in changes being properly detected and everything working just fine:
WARNING: StatReload detected file change in '/opt/venv/lib/python3.8/site-packages/archeolog_common/models/graph.py'. Reloading...
INFO: Shutting down
INFO: Finished server process [156]
INFO: Started server process [191]
I think this is a terrible solution (which works, so I still give it credit), but I wonder: are there any better solutions?
PS: I've found similar issues on SO about this, but none of them were about using everything in Docker containers, which is my main requirement since it's a huge hassle to work outside of our docker-compose stack.
Thanks for reading me!
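One variant that follows from the egg-link mechanism shown above (a sketch of my own, not something tested in this thread): since the egg-link stores nothing but an absolute path, mounting the library source at that same absolute path inside the container should let the editable install resolve without overwriting anything in site-packages. /home/path/to/ArcheologCommon below is the anonymized path printed by the cat command earlier:

services:
  backend:
    volumes:
      # mount the library at the exact path the egg-link points to
      - ../../ArcheologCommon:/home/path/to/ArcheologCommon:ro

Python follows the egg-link to that directory at import time, so live reload keeps working while site-packages stays untouched.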
I've been trying to build a large Windows Docker image for days now, but I can't find an answer anywhere that addresses this issue.
I'm not trying to build a production container solution. I'm just trying to create a prototype of my service running in a Windows container. The issue is that my service depends on about 40 GB of data, which right now is read from disk. Obviously this is not a great approach, and it will have to be refactored before we could ever host the service in a container in production.
I just want a quick and dirty solution of building an image with all this data stored on disk in the container so I can learn more about how the service would run inside a container.
My image structure will end up looking like this:
microsoft/windowsservercore -> mine/data_image -> mine/binary_image
data.dockerfile:
FROM microsoft/windowsservercore
WORKDIR /data/
COPY data .
Build command:
docker build --compress -t mine/data_image -f data.dockerfile .
After a while, the build fails with this message:
failed to copy files: failed to copy directory: write \\?\Volume{8cf8bb9b-c1dd-46a3-b353-3c2198754bf8}\data: There is not enough space on the disk.
I know this has to do with the windowsfilter storage driver, but there is no documentation about this driver that I can find online. It's like it doesn't exist.
Any insights relating to this problem are welcome!
You can override the default 20GB size like this:
dockerd --register-service --storage-opt size=60G
It looks like you can't specify a size for docker build yet.
The ability to specify a size when running a container was added in Docker CE 17.09, so the functionality is at least there. Watch #34947 for a resolution, which shouldn't be too far away.
I know that on Linux the devicemapper storage driver had a dm.basesize option you could set on the daemon to modify the volume size that all containers start with. The Windows storage driver might have a similar option?
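For reference, the same override can be made persistent in the daemon configuration file, which on Windows lives at C:\ProgramData\docker\config\daemon.json; a minimal sketch, reusing the 60G value from the command above:

{
  "storage-opts": [
    "size=60G"
  ]
}

Restart the Docker service after editing so the new base size takes effect; it applies to newly created images and containers, not existing ones.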
I ran this command in my home directory:
docker build .
and it sent 20 GB of files to the Docker daemon before I knew what was happening. Now I have no space left on my laptop. How do I delete the files that were replicated? I can't locate them.
What happens when you run the docker build . command:
The Docker client looks for a file named Dockerfile in the directory where the command runs. If that file doesn't exist, an error is thrown.
The Docker client looks for a file named .dockerignore. If that file exists, the client uses it in the next step; if it doesn't exist, nothing happens.
The Docker client makes a tar package called the build context. By default, it includes everything in the same directory as the Dockerfile. If there are ignore rules in the .dockerignore file, the client excludes the files matched by those rules.
The Docker client sends the build context to the Docker engine, also known as the Docker daemon or Docker server.
The Docker engine receives the build context on the fly and starts building the image, step by step, as defined in the Dockerfile.
After the image build is done, the build context is released.
So your build context is not replicated anywhere except in the image you just created, and only to the extent the image actually needs it. You can check image sizes by running docker images. If you see unused or unnecessary images, remove them with docker rmi unusedImageName.
If your image doesn't need everything in the build context, I suggest using .dockerignore rules to reduce the build context size. Exclude everything that is not necessary for the image. This way the build will be faster, and you will also notice any misconfigured COPY or ADD steps in the Dockerfile.
For example, I use something like this:
# .dockerignore
# exclude everything
*
# include just what I need in the image
!build/libs/*.jar
https://docs.docker.com/engine/reference/builder/#dockerignore-file
https://docs.docker.com/engine/docker-overview/
Likely the space is being used by the resulting image. Locate and delete it:
docker images
Look at the SIZE column there.
Then delete it:
docker rmi <image-id>
You can also delete everything Docker-related:
docker system prune -a
If the build was interrupted for some reason, you can also go to /var/lib/docker/tmp/ with root access and delete the Docker builder's temp files there. In that situation the build context was not transferred completely, so the part that was sent is saved as a temp file under /var/lib/docker/tmp/.
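On current Docker versions there are also built-in commands for seeing where the space went and for clearing build leftovers; a quick sketch:

docker system df      # shows space used by images, containers, local volumes, and build cache
docker builder prune  # removes dangling build cache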
I am trying to run the Docker command below but am receiving a file-not-found error. I have verified that the local folder /D/VMs/... contains the appropriate file, and that the adam-submit command works correctly. I believe there is an issue with how I am mounting the local folder: I assumed it would appear at /data inside the container. For context, I am following the tutorial at http://ampcamp.berkeley.edu/5/exercises/genome-analysis-with-adam.html
using the docker image at https://hub.docker.com/r/heuermh/adam/
Docker Run:
docker run -v '/D/VMs/hs/adam/data:/data' heuermh/adam adam-submit transform '/data/NA12878.sam' '/data/NA12878.adam'
Docker Run #2:
docker run -v //d/vms/hs/adam/data:/data heuermh/adam adam-submit transform /data/NA12878.sam /data/NA12878.adam
Error:
Exception in thread "main" java.io.FileNotFoundException: Couldn't find any files matching /data/NA12878.sam. If you are trying to glob a directory of Parquet files, you need to glob inside the directory as well (e.g., "glob.me.*.adam/*", instead of "glob.me.*.adam"
From the directories you listed, it looks like you're running Docker for Windows. This runs inside a VM, and folders mapped into a container are mapped from that VM. To map a folder from the parent OS, it first needs to be shared to the VM; this is enabled by default for C:/Users.
If you're using docker-machine, check your VirtualBox shared-folder settings; otherwise, check Docker's own file-sharing settings and make sure /D/VMs is included.
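Once the folder is shared, a quick way to verify the mount before running the real image (alpine here is just a small test image of my choosing):

docker run --rm -v //d/vms/hs/adam/data:/data alpine ls -la /data

If the listing shows NA12878.sam, the adam-submit command should be able to find it too.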
I have created an image locally on my Windows system. The image copies the hello-world application WAR file to a Liberty server. I am able to build and run the image locally, but I am unable to push it to Bluemix.
This is my Dockerfile:
FROM registry.ng.bluemix.net/ibmliberty:latest
COPY HelloWorldWeb.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
ENV LICENSE accept
EXPOSE 9080 22
These commands are successful:
$ docker build -t libertytest1 c:/Microservices
$ docker tag libertytest1 registry.ng.bluemix.net/my_ibm/libertytest1
$ docker run --rm -i -t libertytest1
This command fails with the error below:
$ docker push registry.ng.bluemix.net/my_ibm/libertytest1
The push refers to a repository [registry.ng.bluemix.net/my_ibm/libertytest1]
9f24cf425f1e: Pushed
5f70bf18a086: Pushed
f5115b19b62d: Pushed
d255f44e3bce: Pushed
3eb8d309e7a4: Pushed
b9ca157916fa: Pushed
9d3eae113364: Pushed
8077bafd5c40: Pushed
86a4f2b11dd6: Pushed
58de70953d07: Pushed
3a497f2a043d: Pushed
612baa4f0341: Pushed
63f90ec2c29b: Pushed
54f3ce62fc73: Pushed
7c7cf479c321: Pushed
manifest invalid: manifest invalid
When I log in to Bluemix and check my containers, I cannot see this image. Please suggest how to resolve this error.
Note: I added a manifest.yml to my WAR file, but I still get the same error.
Most likely you are running an old version of Docker.
manifest invalid: manifest invalid
Please upgrade your Docker client (to at least v1.8.1) and try the push again; you should then be able to push the image.
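To check which client version you are on before upgrading (the --format flag is available on modern clients; plain docker version works everywhere):

docker version --format '{{.Client.Version}}'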
In Docker 1.10, they've made a change to the way image manifests are generated.
The version of the Docker Registry that the IBM Containers Registry runs doesn't support images built with the new format, so you get the error you see when you try to push.
We're working to get pushes working again using the latest version of Docker, but for now you'll need to do one of the following:
Use the IBM Containers build service: cf ic build -t registry.ng.bluemix.net/my_ibm/libertytest1 c:/Microservices
Downgrade to Docker 1.9 on your machine and run your commands locally as above.
EDIT: the issue has now been resolved. You can push images using Docker 1.10 now.
For anyone using Artifactory: I ran into this same issue.
manifest invalid: manifest invalid
The fix was to update the permissions for the Artifactory user account so that it had write, overwrite, and delete permissions.
I had the same problem with the latest versions of Docker and cf ic.
I solved it by building the image directly on Bluemix using the cf ic build command:
cf ic build -t [Bluemix registry URL] [path to your docker file]