I am trying to create an OpenShift application from an Express Node.js app, using a Dockerfile. The web app is currently a skeleton created with the express generator, and the Dockerfile looks like this:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I am running the OpenShift Online version 3, using the oc CLI.
But when I run:
oc new-app c:\dev\myapp --strategy=docker
I get the following error message:
error: buildconfigs "node" is forbidden: build strategy Docker is not allowed
I have not found a way to enable Docker as build strategy. Why do I get this error message and how could it be resolved?
OpenShift Online does not allow you to build images from a Dockerfile in the OpenShift cluster itself, because that requires extra privileges which at this time are not safe to enable in a multi-user cluster.
You would be better off using the Node.js S2I (source-to-image) builder. It can pull in the source code from your repo and build an image for you without needing a Dockerfile.
Read this blog post to get started:
https://blog.openshift.com/getting-started-nodejs-oso3/
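For example, you can point the Node.js S2I builder straight at a Git repository (a minimal sketch; the repository URL and the app name myapp below are placeholders, not taken from the question):
oc new-app nodejs~https://github.com/yourname/myapp.git --name=myapp
oc expose svc/myapp
The second command creates a route so the app is reachable from outside the cluster.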
Related
I have an application with a React front end and a Node back end. In the React public folder we have a meta.json that holds the version number; every time we run npm run build, the version number in that file is updated. We use this method to make sure the website always displays the new release version: the version number is also stored in the database, and if the two don't match the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I now have is that we have a Dockerfile for React with the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We use this Dockerfile in Azure Pipelines to build an image, push that image to Azure Container Registry, and then use kubectl rollout restart to pull the image and restart the deployment in AKS. After npm run build inside the Dockerfile, my meta.json file has the updated version; I want to commit and push that changed file to the Azure repo, so that the next time the pipeline runs it will have the updated version number.
I have done a POC on this but have not been able to find any easy-to-follow steps. I came across this repo https://github.com/ShadowApex/docker-git-push but I'm not clear on how to execute this properly; any help would be greatly appreciated.
Rather than adding Git inside the Docker image, which would only add extra layers to it, do the following: once your image build has completed, copy the JSON out of the Docker image and push it from the CI machine to Git, or to a bucket, wherever you want to manage it.
The command you can use is:
docker create --name container_name <your-image>:<tag>
docker create will create the new container without running it.
The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.
So once the container filesystem exists, run the copy command to pull the file out of the container onto the CI machine, simple as that.
Docker copy command:
docker cp container_name:/app/build/meta.json .
Now you have the file on the CI machine and you can upload it to Git, or to a bucket, anywhere you like.
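Putting the pieces together, a CI step could look roughly like this (a sketch only; the image name, branch, and Git remote are placeholders, and the pipeline needs credentials that are allowed to push to the repo). Note that with the multi-stage Dockerfile above the build output is copied to /usr/share/nginx/html in the final image, so that is where meta.json ends up:
# create (but do not start) a container from the freshly built image
docker create --name meta_extract myregistry.azurecr.io/myapp:latest
# copy the updated meta.json out of the container filesystem
docker cp meta_extract:/usr/share/nginx/html/meta.json ./meta.json
docker rm meta_extract
# commit the updated file back to the repo
git add meta.json
git commit -m "chore: update meta.json version [skip ci]"
git push origin main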
I'm currently working on a project using NestJS, Prisma and PostgreSQL in a dockerized environment. I have a service for api_server and another one for running postgres. I want to run migration before the api_server starts. So I tried to use this in the Dockerfile of api_server
FROM node:16.13.0-alpine
# Set user and working directory
# Working directory needs to be under /home/node
USER node
WORKDIR /home/node/server
COPY package.json .
# Install node dependencies
RUN yarn
COPY . ./
RUN yarn generate
# the original migration command is kept on the package.json
RUN yarn migrate
EXPOSE 4000
CMD ["yarn", "start:dev"]
But it seems that during the build step of images, none of the services is up. So it throws an error because it can not establish a database connection. What could be a good solution to this problem?
Yes, you are right, the database service is not up while the image is being built. The best way, for me, is to add a healthcheck and wait for it.
Take a look at Docker Compose wait for container X before starting Y
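A minimal docker-compose sketch of that idea, assuming a postgres service next to the api_server built from the Dockerfile above (the service names, credentials, and the move of yarn migrate from the build step to container start-up are assumptions, not taken from the original setup):
services:
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  api_server:
    build: .
    depends_on:
      postgres:
        condition: service_healthy
    # run the migration once the database reports healthy, instead of at build time
    command: sh -c "yarn migrate && yarn start:dev"
    ports:
      - "4000:4000"
With this, the RUN yarn migrate line can be dropped from the Dockerfile, since the migration now runs after the database is ready.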
I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to docker hub and create a repo, unless you have done so already or can access a different container repository. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that will work. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with [1]
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, keep in mind to publish the port the server inside the container listens on. Also, never bind the server to localhost; bind it to 0.0.0.0 so it is reachable through the published port.
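For example, if main.py starts the development server itself, the binding could look like this (a sketch only; the actual main.py in the repo may differ, and port 8000 is just chosen to match the docker run mapping above):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the container"

if __name__ == "__main__":
    # bind to 0.0.0.0, not localhost, so the server is reachable through the published port
    app.run(host="0.0.0.0", port=8000)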
I use something like this in my dockerfile
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
&& poetry install \
&& poetry config virtualenvs.create true
I have a Node.js service which stores access policies that are sent to an Open Policy Agent service when the application starts. The policies can be tested, but to do so they need to be run in an Open Policy Agent environment, which is not a part of my service. Is there a way to run these tests when building my Node.js service's Docker image, so that the image won't be built unless all the tests pass?
So the Dockerfile could look something like this:
FROM openpolicyagent/opa:latest
CMD ["test"]
# somehow check that all tests pass and if not return an error
FROM node:8
# node-related stuff
Instead of putting everything into one project, you could create a build pipeline where you build your Node app and the Envoy+OPA proxy separately, and then have yet another independent project that contains the access rule tests and uses maybe Cypress. Your build pipeline could then install the new version to the DEV environment unconditionally, but require the separate test project to pass before it deploys to the STAGE and PROD environments.
You can use a RUN statement for the desired steps, for example:
FROM <some_base_image>
RUN mkdir /tests
WORKDIR /tests
COPY ./tests .
RUN npm install && npm run build && npm test
RUN mkdir /src
WORKDIR /src
COPY ./src .
RUN npm install && npm run build
CMD npm start
Note: RUN is executed while the image is being built, whereas CMD and ENTRYPOINT are executed when a container is launched from the built image.
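Adapting that idea to the OPA case from the question, a multi-stage sketch could look like this (an assumption-heavy sketch: the policies/ directory, the /policies path, and the COPY --from trick that forces the test stage to run are illustrative, and the /opa binary path is assumed from the official image's entrypoint):
FROM openpolicyagent/opa:latest AS policy-test
COPY policies/ /policies/
# exec-form RUN needs no shell; if any test fails, this stage and the whole build fail
RUN ["/opa", "test", "/policies", "-v"]

FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# copying from the test stage forces it to be built, and therefore its tests to pass
COPY --from=policy-test /policies /policies
CMD ["npm", "start"]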
I have a Node backend that uses ffmpeg. I built the Docker image using a multi-stage build, part Node, part ffmpeg (Dockerfile pasted below). Once built, I access the container locally and see that ffmpeg is installed correctly in it. I then deploy this image to Elastic Beanstalk. Oddly, once there, when accessing the container, ffmpeg has disappeared. I absolutely can't figure out what is happening and why the container isn't the same when deployed.
Here are more details:
Dockerfile
FROM jrottenberg/ffmpeg:3.3-alpine
FROM node:11
# copy ffmpeg bins from first image
COPY --from=0 / /
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 6969
CMD [ "npm", "run", "start:production" ]
I build the docker using this command :
docker build -t <project-name> .
I access the local docker afterwards this way :
docker run -i -t <project-name> /bin/bash
When I put in "ffmpeg", it recognizes it and if i try "whereis", it returns me /usr/local/bin.
Then I deploy it do eb using
eb deploy
This is where things get interesting
I SSH into my eb instance. Once there, I find the container ID and use
docker exec -it <instance-id> bash
to access the container. It has all the Node stuff, but ffmpeg is missing. It's not in /usr/local/bin as it was before deploying.
I even installed ffmpeg directly on the eb instance, but this didn't help me since the Node backend looks for ffmpeg inside the container. Any pointers or red flags that you see from this are greatly appreciated, thank you
edit: the only difference in Docker versions is that the one running locally is 18.09 / API 1.39, whereas the one on eb is 18.06 / API 1.38
My Elastic Beanstalk t2.micro instance just didn't have enough CPU or RAM to finish installing ffmpeg, so it was timing out. Upgrading to a t2.medium solved the issue.