I ran into the following problem. I'm building my Docusaurus site locally via a Docker container.
From inside the Docusaurus directory I run this command:
docker run -it --rm --name doc-lab --mount type=bind,source=D:\work\some_path,target=/target_path -p 3000:3000 doc-lab
And then, when the container is up, I run this command in the container terminal:
npm --prefix=/target_path run build
And I get the following:
docusaurus: not found
Although the directory does exist:
# cd /
# ls
bin boot dev target_path etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# npm --prefix=/target_path run build
> target_path@0.0.1 build
> docusaurus build
sh: 1: docusaurus: not found
What went wrong?
When the command runs successfully, the site opens at localhost.
Usually npm run start is used to run a development version, and npm run build is used to prepare the files for deployment to a production environment. So in your case I think npm run build should be run either with a RUN directive in the Dockerfile, or even on your computer before building the Docker image, and then the results can be copied to the target directory. The CMD instruction in the Dockerfile would then contain the command that starts the production server. You can check the scripts section of the package.json file to see the actual commands behind npm run start and npm run build.
Well, it was not that simple. Because I didn't create the Docker image myself but downloaded it, I first needed to run
npm install
And that was the answer.
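To make the fix concrete: the build script calls the locally installed docusaurus binary from node_modules/.bin, which didn't exist yet in the freshly mounted directory. A minimal sketch of the in-container sequence, assuming the same /target_path mount as above:

```shell
# install dependencies into /target_path/node_modules first;
# this provides the local `docusaurus` binary that the build script invokes
npm --prefix=/target_path install

# now `npm run build` can resolve `docusaurus`
npm --prefix=/target_path run build
```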
Related
I'm struggling to test my app with Cypress in Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I ALWAYS get this error when I launch it: `Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e`
Meaning that Cypress can't find cypress.json, but it is precisely in the dedicated folder. Here is my directory/file tree:
pace
  front
    cypress
    cypress.json
So this is a standard file tree for e2e testing, and despite all of my tricks (not using $PWD but the full directory path, reinstalling Docker, the colima engine, etc.) nothing works, and if I run npm run cypress locally everything works just fine!
Needless to say, I am in the /pace/front directory when trying these commands.
Can you help me please?
The -v $PWD:/e2e part is a Docker instruction to mount a volume (a bind mount). It mounts the current directory to /e2e inside the Docker container at runtime.
The docs mention a structure where the cypress.json file is expected to end up directly under /e2e. To get it to be like this you have to either:
-v $PWD/pace/front:/e2e
or run the command from inside the pace/front directory
Since the CMD and ENTRYPOINT commands in Docker run from the WORKDIR, you could also try running it from where you were but changing the workdir, as in:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that that would work.
My personal choice would be to just run it from pace/front
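Putting that preferred option together (a sketch, assuming the pace directory sits under your current directory):

```shell
# run from inside pace/front so cypress.json lands directly at /e2e/cypress.json
cd pace/front
docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
```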
So I wrote this Dockerfile:
FROM node:13-alpine as build
WORKDIR /app
COPY package*.json /app/
RUN npm install -g ionic
RUN npm install
COPY ./ /app/
RUN npm run build
FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build /app/dist/ /usr/share/nginx/html/
When it runs the command npm run build, it creates the dist folder.
The second-to-last line removes the default content of the nginx/html folder, and the last line replaces it with the files from the dist folder, where index.html lives.
When I run:
docker build -t dashboard-app:v1 .
it creates the image. Then I run:
docker run --name dashboard-app-container -d -p 8080:80 dashboard-app:v1
When I go to localhost:8080 it shows "Welcome to nginx! If you see this page, the nginx web server is successfully installed and working. Further configuration is required."
I don't know if my problem is that Docker is not able to replace the dist folder and find index.html, or if it is some port problem.
When I run it on localhost:4200 I can see the dashboard app.
Any suggestions?
Thank you in advance
It is certainly hard to know what your dist folder contains and what was copied over to the nginx/html/ location.
As long as you get a response on port 8080, nginx is running but is not able to find an index.html page in the nginx/html/ folder.
What I suggest is to run your Docker image with the following command from a terminal. Notice the -d is removed, so you will be able to see the logs from the container:
docker run --name dashboard-app-container -p 8080:80 dashboard-app:v1
In another terminal, connect to the running container (by its name, not the image name) using the following command:
docker exec -it dashboard-app-container sh
This will open a shell in the container. Navigate to /usr/share/nginx/html and investigate its content. You will be able to see what was copied over from the dist folder and adjust the Dockerfile afterwards.
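One common cause worth checking while you are in that shell: since the app serves on localhost:4200 (the Angular dev-server default), this is likely an Angular CLI project, and ng build typically writes to dist/<project-name> rather than dist/ directly. If that is what you find under /app/dist, the COPY needs the extra path segment. A hedged sketch, assuming the project is named dashboard-app (use the actual subfolder name you see):

```Dockerfile
# copy the nested build output, not the dist/ wrapper folder
COPY --from=build /app/dist/dashboard-app/ /usr/share/nginx/html/
```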
In my build pipeline, I'm trying to run the below task:
The main responsibility of the task is to mount the Test and Script folders from AzDO into the container's working dir, which is /app/, and then run the tests (basically it will run npm test inside the container). But unfortunately, I don't see the expected outcome. Hence I changed the command at the end of the docker run to ls -ltrR /app to check whether the files were copied or not. I see that the directories are created, but no files are inside.
So, to prove that the files exist in $(System.DefaultWorkingDirectory) for AzDO, I tried running all kinds of ls commands prior to the docker run, which show that the files do exist in $(System.DefaultWorkingDirectory). But for some reason the files are not mapped into the Docker container. I tried using $PWD and $(Build.SourcesDirectory), but in all cases it is not working.
I tried to replicate the same docker run command on my local workstation, and it works as expected. So, can anyone suggest how to mount the files in the docker run using an AzDO build task?
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo "check if the Script and Test files exist"
      ls -Rtlr ./Script
      cat ./Script/dev_blueprint_jest2/MyJavaScript.js
      ls -ltrR $PWD/Script
      ls -ltr $PWD/Test
      ls -ltrR $(Build.SourcesDirectory)/Script/
      docker run -v $(System.DefaultWorkingDirectory)/Script/:/app/ScriptStages/ -v $(System.DefaultWorkingDirectory)/Test/:/app/Test/ -i dockerimage ls -ltrR /app/
    workingDirectory: $(System.DefaultWorkingDirectory)
  displayName: 'Run the JS test pipeline.'
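One way to narrow this down (a hedged diagnostic sketch, added to the same inline script, using a stock alpine image instead of your dockerimage) is to bind-mount one directory into a throwaway container and list it. This separates "the agent cannot bind-mount at all" from "something about my image or target path hides the mount":

```shell
# If this lists your files, bind mounts work on the agent and the problem
# is specific to the image or the target path. If it is empty, the Docker
# daemon the agent talks to cannot see the agent's filesystem (for example,
# the job itself runs inside a container or against a remote daemon).
docker run --rm -v "$(System.DefaultWorkingDirectory)/Script:/probe" alpine ls -ltrR /probe
```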
I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine ./dist with a folder on the container app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:app/dist # I'm expecting files changed or added in the container's app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an npm script, and I want them to be available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
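As a side note, the bind-mount target in that compose file is a relative path (app/dist); I believe Compose requires an absolute container path here and will reject the spec otherwise. A corrected fragment (though fixing the path alone will not surface the build-time files, for the reasons explained in the answer):

```yaml
    volumes:
      - ./dist:/app/dist
```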
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
-v "$PWD/dist:/out" \
myimage \
cp -a /app/dist /out
Or, if you specifically wanted to use docker cp (which copies directories recursively by default; it has no -r flag):
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that either of these sequences is more complex than just installing a local Node via a package manager, and requires administrator permissions (the same technique can be used to overwrite any host file, including the /etc/shadow file with encrypted passwords).
I have a project developed in Nuxt.js. Now I want to deploy it with Docker, but for some reason I need to build it on my local macOS machine. It would be better to run npm install on the local machine, and then deploy it to a Linux server in the production environment. Can this be done?
Sure it can. Build your project normally (via npm install), then, inside your project directory, write a Dockerfile like this:
FROM node:7.8.0-alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
RUN apk update && apk upgrade && apk add git
# Copy your already built project files inside image
COPY . .
ENV HOST 0.0.0.0
EXPOSE 3000
# start command
CMD [ "npm", "start" ]
Make sure your Dockerfile is in the project's root directory, where you'd normally run npm start.
Then, in order to create an image with your project, just do:
$ docker build -t myapp .
and run it with:
$ docker run -it -p 3000:3000 myapp
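End to end, the workflow might look like this (a sketch; it assumes a standard Nuxt scripts section where npm run build produces the output that npm start serves, and note that any native npm modules compiled on macOS may not run inside the Linux-based image):

```shell
# on the mac: install dependencies and build the production bundle
npm install
npm run build

# bake the already-built project into the image and run it
docker build -t myapp .
docker run -it -p 3000:3000 myapp
```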