Trouble Dockerizing default Shopify starter app

I used the Shopify CLI to create a default starter app as described in their documentation, by running this command:
npm init @shopify/app@latest
The app runs fine locally, but after building a Docker image and running the app in a container, I get the following error:
file:///app/index.js:29
SCOPES: process.env.SCOPES.split(","),
^
TypeError: Cannot read properties of undefined (reading 'split')
at file:///app/index.js:29:30
at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
at async Promise.all (index 0)
at async ESMLoader.import (node:internal/modules/esm/loader:530:24)
at async loadESM (node:internal/process/esm_loader:91:5)
at async handleMainPromise (node:internal/modules/run_main:65:12)
Node.js v18.12.1
I read that this error is due to a missing env file (called either process.env or .env, depending on where you look). Why this required file is not created automatically by the Shopify CLI is a mystery to me, as is why the app runs locally without it.
Nevertheless, I created an env file using the command npm run shopify app env pull as described here. I then copied that file to process.env just to be sure. But I still get the TypeError when running in Docker.
I then read here that you should place the env file in the /web folder instead of the root folder. That didn't work either.
My Dockerfile was automatically created by the Shopify CLI, and looks like this:
FROM node:18-alpine
ARG SHOPIFY_API_KEY
ENV SHOPIFY_API_KEY=$SHOPIFY_API_KEY
EXPOSE 8081
WORKDIR /app
COPY web .
RUN npm install
RUN cd frontend && npm install && npm run build
CMD ["npm", "run", "serve"]
My .env and process.env files look like this:
SHOPIFY_API_KEY=00000000000000000000000000000000
SHOPIFY_API_SECRET=00000000000000000000000000000000
SCOPES=write_products
HOST=https://a1b1-123-456-789-00.eu.ngrok.io
What am I missing?
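One likely explanation: the .env file is only read by the Shopify CLI when it runs the app locally; nothing in the generated Dockerfile copies it into the image or loads it at runtime, and only SHOPIFY_API_KEY is baked in. The remaining variables then have to be supplied when starting the container. A minimal sketch, assuming the image is tagged my-shopify-app and the command is run from the folder containing the .env file:
docker run -p 8081:8081 --env-file .env my-shopify-app
Alternatively, pass the variables individually with -e SCOPES=write_products -e HOST=... and so on.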

Related

Dockerizing a Nodejs application - Error in copying files from local to WORKDIR

I am trying to Dockerize my Nodejs application which uses Firebase cloud functions.
This is what my project directory looks like (screenshot omitted).
When running without Docker, I follow these steps to start the server:
cd functions
yarn serve (this script is present in functions/package.json)
Now I want to Dockerize this application. The Dockerfile I created looks like this:
FROM node:14-alpine
WORKDIR /api
COPY package*.json ./
RUN npm install
COPY . ./api
CMD ["npm", "run", "serve"]
When I build an image using this Dockerfile and run the container, execution fails (I think because of missing files in the container).
Command used to build the image:
docker build -t <my-image-name> ../functions
When I look into the container file system, I do not see the file structure I have on my machine.
What changes are needed in the Dockerfile?
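A likely fix, for what it's worth: with WORKDIR /api in effect, COPY . ./api puts the source in /api/api rather than /api, so the files are not where npm run serve expects them. A sketch of a corrected Dockerfile, assuming it lives in and is built from the functions folder:
FROM node:14-alpine
WORKDIR /api
# copy the manifests first so the dependency install layer is cached
COPY package*.json ./
RUN npm install
# copy the rest of the source into the working directory itself
COPY . .
CMD ["npm", "run", "serve"]
With the Dockerfile inside functions, the build would then be run from that folder as docker build -t <my-image-name> .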

Query engine binary for current platform "debian-openssl-1.0.x" could not be found

I need help Dockerizing my Prisma + GraphQL app. I have tried many options and tricks to resolve this issue but have not been able to make it work.
When I run the application without Docker, it works perfectly (screenshot omitted).
But after Dockerizing the app, it shows me the error quoted in the title (screenshot omitted).
Can anyone help me out with this? I cannot publish or Dockerize the app in my local environment.
Dockerfile
# pull the official base image
FROM node:10.11.0
# set your working directory
WORKDIR /api
# install application dependencies
COPY package*.json ./
RUN npm install --silent
RUN npm install -g @prisma/cli
# add app
COPY . ./
# will start app
CMD ["node", "src/index.js"]

Is it possible to use docker to temporary create a test environment?

I have a node.js service that stores access policies which are sent to an Open Policy Agent service when the application starts. Policies can be tested, but to do so they need to run in an Open Policy Agent environment, which is not part of my service. Is there a way to run these tests when building my node.js service's Docker image, so that the image won't be built unless all the tests pass?
So the Dockerfile could look something like this:
FROM openpolicyagent/opa:latest
CMD ["test"]
# somehow check that all tests pass and if not return an error
FROM node:8
# node-related stuff
Instead of putting everything into one project, you could create a build pipeline where you build your Node app and the Envoy+OPA proxy separately, and then have yet another independent project that contains the access rule tests and uses maybe Cypress. Your build pipeline could then install the new version to the DEV environment unconditionally, but require the separate test project to pass before it deploys to the STAGE and PROD environments.
You can use a RUN statement for the desired steps, for example:
FROM <some_base_image>
RUN mkdir /tests
WORKDIR /tests
COPY ./tests .
RUN npm install && npm run build && npm test
RUN mkdir /src
WORKDIR /src
COPY ./src .
RUN npm install && npm run build
CMD npm start
Note: RUN is executed while building the image, whereas CMD and ENTRYPOINT are executed when launching a container from the built image.
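Building on that, a multi-stage build can run the OPA tests in a throwaway stage so the whole image build aborts if they fail. A sketch, assuming the policies and their tests live in a policies/ folder and that the opa binary in the official image sits at /opa:
# stage 1: run the policy tests; a non-zero exit from `opa test` fails the build
FROM openpolicyagent/opa:latest AS policy-tests
COPY policies/ /policies/
RUN ["/opa", "test", "/policies"]

# stage 2: the service image, only produced if the tests above passed
FROM node:8
WORKDIR /src
COPY . .
RUN npm install && npm run build
# reference the test stage so BuildKit does not skip it as unused
COPY --from=policy-tests /policies /policies
CMD ["npm", "start"]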

Webpack app in docker needs environment variables before it can be built

New to docker so maybe I'm missing something obvious...
I have an app split into a web client and a back end server. The back end is pretty easy to create an image for via a Dockerfile:
COPY . .
RUN npm install && npm run build
CMD ["npm", "run", "start"]
The already-built back end app will then access the environment variables at runtime.
With the web client it's not as simple, because webpack needs to have the environment variables before the application is built. As far as I'm aware, this leaves only two options:
Require the user to build their own image from the application source
Build the web client on container run by running npm run build in CMD
Currently I'm doing #2 but both options seem wrong to me. What's the best solution?
FROM node:latest
COPY ./server /app/server
COPY ./web /app/web
WORKDIR /app/web
CMD ["sh", "-c", "npm install && npm run build && cd ../server && npm install && npm run build && npm run start"]
First, it would be a good idea for both the backend server and web client to each have their own Dockerfile/image. Then it would be easy to run them together using something like docker-compose.
The way you are going to want to provide environment variables to the web Dockerfile is with build arguments. Docker build arguments are available when building the Docker image. You use them by specifying the ARG instruction in the Dockerfile, or by passing the --build-arg flag to docker build.
Here is an example Dockerfile for your web client based on what you provided:
FROM node:latest
ARG NODE_ENV=dev
COPY ./web /app/web
WORKDIR /app/web
RUN npm install \
&& npm run build
CMD ["npm", "run", "start"]
This Dockerfile uses the ARG directive to declare a NODE_ENV build argument with a default value of dev.
The value of NODE_ENV can then be overridden when running docker build.
Like so:
docker build -t <myimage> --build-arg NODE_ENV=production .
Whether you override it or not, NODE_ENV will be available to webpack before the application is built. This allows you to build a single image and distribute it to many people without them having to build the web client themselves.
Hopefully this helps you out.

Problems creating OpenShift app using Dockerfile (with oc new-app)

I am trying to create an OpenShift application from an Express Node js app, using a Dockerfile. The web app is currently a skeleton created with express generator and the Dockerfile looks like this:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I am running the OpenShift Online version 3, using the oc CLI.
But when I run:
oc new-app c:\dev\myapp --strategy=docker
I get the following error message:
error: buildconfigs "node" is forbidden: build strategy Docker is not allowed
I have not found a way to enable Docker as build strategy. Why do I get this error message and how could it be resolved?
OpenShift Online does not allow you to build images from a Dockerfile in the OpenShift cluster itself. This is because that requires extra privileges which at this time are not safe to enable in a multi user cluster.
You would be better off using the NodeJS S2I (source-to-image) builder. It can pull in the source code from your repo and build an image for you without needing a Dockerfile.
Read this blog post to get started:
https://blog.openshift.com/getting-started-nodejs-oso3/
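A minimal sketch of the S2I route, assuming the code is pushed to a Git repository (the URL and app name are placeholders):
oc new-app nodejs~https://github.com/myuser/myapp.git
# expose the resulting service so the app gets a public route
oc expose svc/myapp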
