Thanks for your attention; I've been facing this issue for a couple of days.
I've got a React/Express project from Create React App inside a Docker container.
Basically, I've got some JSON files that I need to get at Docker runtime, because I don't want to maintain different Docker images; I'd like to have a single one.
When I retrieve the files and then run npm run build at Docker runtime, my code works fine.
The problem is that I want to run npm run build at Docker build time, and then my code cannot find the JSON files because they were not bundled by webpack.
This is how I currently load the files:
const artifact = require(`./${path.join(config.files.dir, `${file}.json`)}`);
How can I load these files after the webpack build?
Currently, I don't have any webpack config files.
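One direction I'm considering is fetching the JSON over HTTP at runtime instead of require-ing it, roughly like this (a sketch only; the /artifacts route and the express.static line are assumptions, nothing I have in place yet):

// Rough sketch: fetch the JSON at runtime so webpack never needs to bundle it.
// Assumes Express exposes the files directory, e.g.
//   app.use('/artifacts', express.static(config.files.dir));
async function loadArtifact(file) {
  const response = await fetch(`/artifacts/${file}.json`);
  if (!response.ok) throw new Error(`Could not load ${file}.json`);
  return response.json();
}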
Thanks in advance.
I have a multi-container application, with nginx as the web server and reverse proxy, and a simple 'Hello World' Streamlit app.
It is available on my GitLab.
I am totally new to DevOps, and would therefore like to leverage GitLab's Auto DevOps to make it easy.
By default, GitLab's Auto DevOps expects a single Dockerfile, at the root of the project (source).
Surprisingly, I found only one resource on my multi-container use case that aimed to answer this issue: https://forum.gitlab.com/t/auto-build-for-multiple-docker-containers/46949
I followed the advice and made only slight changes to the .gitlab-ci.yml for the paths to my Dockerfiles.
But then I have an issue with each Dockerfile not finding the files in its own folder:
the app's Dockerfile doesn't find requirements.txt,
and nginx's Dockerfile doesn't find project.conf.
It seems that the DOCKERFILE_PATH: src/nginx/Dockerfile variable only gives access to the Dockerfile itself, but doesn't treat this path as the location for the build.
How can I customize this .gitlab-ci.yml so that the build passes correctly?
Thank you very much!
The reason the files are not being found is how Docker's build context works. Since you're running docker build from the repository root, the context is the root rather than the directory containing your Dockerfile. That means your docker build command is looking for /requirements.txt instead of src/app/requirements.txt. You can fix this relatively easily by cd-ing into src/app before you run docker build and removing the -f flag (since the Dockerfile is now in the current directory).
Since each job executes in an isolated container, you don't need to worry about cd-ing back to your build root; your job never runs any other non-Docker commands.
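A minimal sketch of what that can look like in the job's script (the job split and image tag are assumptions, not taken from your .gitlab-ci.yml):

# Sketch for the app build job; the nginx image would get an analogous job.
cd src/app
docker build -t "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHA" .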
I am setting up a CI/CD pipeline for the SAP Cloud Platform Neo environment based on the prebuilt pipeline from Project Piper. I tried to execute the prebuilt Project Piper library for Jenkins and got the following error. The error says neo.sh is not found, but I downloaded the Neo SDK and placed it in the neo-sdk folder, and neo.sh is available inside the /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools folder on Linux.
Please see the error in Jenkins.
Please see the .pipeline/config file where that location is referenced.
Docker is not used; I set up Jenkins on Ubuntu inside a VMware virtual machine. If Docker is not available, the library is capable of running locally on the Jenkins server.
I am keeping the neo-sdk tool in a local folder that contains neo.sh, which is used to deploy the application to SAP Cloud Platform. I am not writing any scripts of my own, as everything comes from Project Piper's prebuilt scripts.
As already stated in the GH issue, you should extend your PATH env var to also look inside /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
You do this by executing export PATH=$PATH:/opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
An even better way would be to symlink neo.sh into a folder that is already on the PATH.
With echo $PATH you can display the env var and see which directories are already exposed.
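For the symlink route, something like this should do it (using /usr/local/bin purely as an example of a directory that is usually already on the PATH):

# Link neo.sh into a directory that is already on the PATH
sudo ln -s /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools/neo.sh /usr/local/bin/neo.sh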
The issue is solved, thanks to both of you. I used the EnvInject plugin in Jenkins, then went to Manage Jenkins -> Configure -> Set environment variables and set the PATH as shown.
For more detail, see the comment from XP84 in this Stack Overflow link.
We're having issues with our automated deployment system.
On our own computers, running ng build generates the dist folder, which contains the assets as expected.
I have replicated this on the build server by manually pulling the Git repository and running the "build file" (the build server runs on Windows Server; the build and deploy process is managed via a PowerShell script for convenience).
When our Jenkins server runs the build script, the assets folder is missing from the /dist/ folder, as well as some other files configured in angular.json.
It is also not properly compiling the stylesheets, which I believe is due to the same root cause.
The issue persists when running the PowerShell script directly from the Jenkins workspace, even with the shell run as a system administrator.
The CLI does not produce any errors.
I'm attaching a verbose log, in case this could be helpful.
https://gist.github.com/cf-jola/6cc6cff138da5105f3b10adffb72895f#file-output-txt
By running the script as the system administrator, I've ruled out it being a permissions issue. Jenkins also manages to create the other files, such as the .js files and index.html, just fine.
My workaround right now is to manually copy the assets folder via the deploy PowerShell script; however, I'd love to get rid of this workaround as we're starting to get multiple entries in our angular.json > assets section.
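The copy itself is roughly a recursive Copy-Item along these lines (paths are placeholders; the real ones are in the gists below):

# Interim workaround: copy the assets folder into dist by hand
# (source and destination paths are placeholders)
Copy-Item -Path ".\src\assets" -Destination ".\dist\<project-name>\assets" -Recurse -Force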
For reference:
angular.json https://gist.github.com/cf-jola/6cc6cff138da5105f3b10adffb72895f#file-angular-json
deploy script: https://gist.github.com/cf-jola/6cc6cff138da5105f3b10adffb72895f#file-deploy-ps1
It's a bug in either Node or the Angular CLI.
Because we have parentheses, ( and ), in the build path, they get wrapped in square brackets.
This causes the path C:\Program Files (x86)\Jenkins\... to become C:\Program Files [(]x86[)]\Jenkins\..., which is invalid.
We discovered the issue by using Process Monitor and looking over the events generated during the build process.
We are trying to build a C# app that has gulp do some of the bundling/minification for the front-end website.
We are currently using Node 6 because of gulp dependencies. A future branch is updating to Node 10, but that requires different Node dependencies as we migrate our project. I thought using a Docker container for the build might help alleviate switching between Node versions on our local machines.
So I created a Docker image:
FROM microsoft/dotnet-framework:4.7.2-sdk
Then I loaded npm on top of it. Binding a volume to my source directory, I'm able to install npm packages, install NuGet packages, and call the build, but it fails because it is missing my gulp step.
I have gulp installed both globally in the container and locally in the node_modules folder. I end up with an error like:
C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:19
var semver = version.match(semverRegEx);
^
TypeError: Cannot read property 'match' of undefined
at new Semver (C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:19:23)
at Function.match (C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:374:15)
I have searched all over the web for multiple days with no luck finding anything that explains what is blowing up here. Has anyone been able to get gulp to run successfully inside a Windows Server Core Docker container? Is the problem with the directory being mounted? I'm using Docker for Windows, if that matters here.
On Windows Server Core, the best I can surmise is that there is an issue with Docker, volume mounts, and npm/gulp symbolic links.
I believe the default user on the container wants to use the C:\ContainerMappedDirectories folder. I am not sure at this point whether that is configurable.
But my workaround looked like this in PowerShell, all running in the container:
npm install -g gulp@^3.9.1
md c:\ContainerMappedDirectories
cd c:\containermappedDirectories
robocopy /S "c:\MyUISource" C:\ContainerMappedDirectories
npm install #installs gulp locally to this folder
gulp prod-build
robocopy /S C:\ContainerMappedDirectories\dist c:\MyUISource
So I installed gulp globally, created the local workspace, copied my source in, ran npm/gulp, then copied my output back to the mounted location.
In the end it was the best workaround I could accomplish to complete my task.
Is there a way to only download the dependencies but not compile the source?
I am asking because I am trying to build a Docker build environment for my bigger project.
The idea is that during docker build I clone the project, download all dependencies, and then delete the code.
Then I use docker run -v to mount the frequently changing code into the Docker container and start compiling the project.
Currently I just compile the code during build and then compile it again on run. The problem is that when a dependency changes I have to rebuild from scratch, and that takes a long time.
Run sbt's update command. Dependencies will be resolved and retrieved.
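A rough sketch of how that can fit the Dockerfile (the base image is a placeholder, and copying only the build definition instead of cloning and deleting the code is one common variation):

# Sketch only: resolve and cache dependencies at build time, compile at run time.
FROM <image-with-jdk-and-sbt>
WORKDIR /build
# Copy just the build definition so the dependency layer is cached
COPY build.sbt ./
COPY project/ project/
RUN sbt update
# At run time, mount the full source over /build and compile:
#   docker run -v "$PWD:/build" <image> sbt compile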