Docker noob here, so bear with me.
I have a VPS with dokku configured, and it already has multiple apps running.
I am trying to add a fairly complex app, but docker fails with the following error.
From what I understand, I need to update the packages the error mentions. The problem is they are needed by another module, so I can't update them. Is there a way to make docker bypass the warning and build anyway?
Here is the content of my Dockerfile:
FROM mhart/alpine-node:6
# Create app dir
RUN mkdir -p /app
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install
# Bundle the app
COPY . /app
EXPOSE 9337
CMD ["npm", "start"]
I've been trying this for a couple of days now with no success.
Any help greatly appreciated.
Thanks.
I believe an npm process getting killed with error 137 in docker is usually caused by an out-of-memory error. You can try adding a swap file (or adding more RAM) to test this.
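As a rough sketch (assuming a Linux host with root access; the 1 GB size is arbitrary), you can check whether the kernel OOM killer was responsible and add a swap file like this:

```shell
# Check whether the kernel OOM killer terminated the process
dmesg | grep -i 'killed process'

# Create and enable a 1 GB swap file on the host
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Verify the swap is active
free -h
```

Note these commands run on the host (not inside the container), and the swap file won't persist across reboots unless you also add it to /etc/fstab.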
First, I build a Quarkus native image and everything seems fine. When I try to run it, I get the following error:
standard_init_linux.go:228: exec user process caused: no such file or directory
This is the Dockerfile:
FROM alpine:3.15
WORKDIR /deployment/
COPY native/*-runner /deployment/app
RUN chmod 775 /deployment
EXPOSE 8082
USER 1001
ENTRYPOINT [ "./app","-Dquarkus.http.host=0.0.0.0"]
I'm on a Windows machine and the command used to generate the native executable (the *-runner file) is:
mvn package -Pnative -Dquarkus.package.type=native -Dquarkus.native.container-build=true
I'm not sure what I'm doing wrong. After browsing similar issues, I saw some were solved with the -Dquarkus.native.container-build=true flag, but it didn't work in my case. Thank you!
The problem was in the Dockerfile: I was using alpine:3.15 as the base image. Alpine ships musl libc, while the native executable is linked against glibc, so the dynamic loader the binary expects doesn't exist in the image, hence the "no such file or directory" error.
After reading the guide again, this fixed the problem:
FROM quay.io/quarkus/quarkus-micro-image:1.0
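For reference, a sketch of the corrected Dockerfile with that base image swapped in (the rest mirrors the Dockerfile from the question above):

```dockerfile
# glibc-based minimal image recommended by the Quarkus guides
FROM quay.io/quarkus/quarkus-micro-image:1.0
WORKDIR /deployment/
COPY native/*-runner /deployment/app
RUN chmod 775 /deployment
EXPOSE 8082
USER 1001
ENTRYPOINT [ "./app", "-Dquarkus.http.host=0.0.0.0" ]
```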
When running npm install locally, everything is fine, but as soon as I try it inside my docker container I get the following error:
/api/node_modules/sharp/lib/constructor.js:1
Something went wrong installing the "sharp" module
Error relocating /api/node_modules/sharp/build/Release/../../vendor/8.10.6/lib/libvips-cpp.so.42: _ZNSt7__cxx1119basic_ostringstreamIcSt11char_traitsIcESaIcEEC1Ev: symbol not found
Any help greatly appreciated!
The Dockerfile is incredibly simple:
FROM node:12.13.0-alpine AS alpine
WORKDIR /api
COPY package.json .
RUN npm install
In my case, after trying a lot of different options from scouring the GitHub issues for sharp, adding this line to my Dockerfile fixed it:
RUN npm config set unsafe-perm true
In case you are using node:15 or later, --unsafe-perm was removed; this is a workaround:
...
RUN chown root:root . # make sure root owns the directory before installing sharp
RUN npm install
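Putting the workaround in context, a minimal Dockerfile sketch might look like this (the base image and start command are illustrative, not from the original question):

```dockerfile
FROM node:16-alpine
WORKDIR /api
COPY package.json .
# npm drops root privileges when running install scripts; making root
# own the working directory lets sharp's install scripts complete
RUN chown root:root .
RUN npm install
COPY . .
CMD ["npm", "start"]
```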
I need help dockerizing my Prisma + GraphQL app. I have tried many options and tricks to resolve this issue but cannot make it work.
When I run the application without Docker, it works perfectly.
But after dockerizing the app, it shows me an error.
Can anyone help me out with this? I am unable to publish or dockerize the app in my local environment.
Dockerfile
# pull the official base image
FROM node:10.11.0
# set your working directory
WORKDIR /api
# install application dependencies
COPY package*.json ./
RUN npm install --silent
RUN npm install -g @prisma/cli
# add app
COPY . ./
# will start app
CMD ["node", "src/index.js"]
I've deployed several ASP.NET Core websites and never encountered such a strange issue. I have no error returned, no sign of error except that the container just stopped.
The command looks simple like this:
>> docker run -it ...
>> ...
The next >> prompt returned after a moment, with no sign of error. When I checked the status of containers with docker ps -a, I could see that the container I had just started had exited immediately with Exited (0). Some questions I've found on the net are about immediate exits as well, but they involved console apps (without a Console.ReadLine keeping the code from exiting). Here I've even used the -it flag to keep the terminal interactive.
Could you give me some suggestions to diagnose this? I'm sure something is going wrong, but with no error reported I'm blind and stuck. There should be no reason for an ASP.NET Core website to exit like that unless there is some error, and in that case the error should be returned/reported correctly, shouldn't it?
UPDATE:
After debugging with Docker instead, I found out that there were actually some errors related to the SQL connection (from inside the container to the outside host). Resolving all those errors makes the container run well without exiting prematurely. So the question now is: why on earth was the exit code 0? That doesn't make sense at all, and no error was reported either. How is that possible? Could I have done something wrong (in my app code or docker configuration) for that to be possible?
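For diagnosing silent exits like this, a couple of standard docker commands can help (the container name here is a placeholder):

```shell
# Show the container's stdout/stderr, even after it has exited
docker logs my-container

# Inspect the recorded exit state of the container
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' my-container
```

docker logs in particular often surfaces startup errors (like failed database connections) that never reach the terminal that ran docker run.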
UPDATE
The dockerfile content:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY ["My.Web/My.Web.csproj", "My.Web/"]
RUN dotnet restore "My.Web/My.Web.csproj"
COPY . .
WORKDIR "/src/My.Web"
RUN dotnet build "My.Web.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "My.Web.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "My.Web.dll"]
I'm using Visual Studio Code with Cloud Code extension. When I try to "Deploy to Cloud Run", I'm having this error:
Automatic image build detection failed. Error: Component map not
found. Consider adding a Dockerfile to the workspace and try again.
But I already have a Dockerfile:
# Python image to use.
FROM python:3.8
# Set the working directory to /app
WORKDIR /app
# copy the requirements file used for dependencies
COPY requirements.txt .
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Install ptvsd for debugging
RUN pip install ptvsd
# Copy the rest of the working directory contents into the container at /app
COPY . .
# Run app.py when the container launches
ENTRYPOINT ["python", "-m", "ptvsd", "--port", "3000", "--host", "0.0.0.0", "manage.py", "runserver", "--noreload"]
How can I solve this, please?
I wasn't able to repro the issue, but I checked in with the Cloud Code team and it sounds like there could have been an underlying issue with gcloud that wasn't your fault.
I don't think you'll see this error again, but if you do, it would be awesome if you could file an issue at the Cloud Code VS Code repo so that we can gather more info and take a closer look.
This does show that we need to improve our error messages though. I've filed a bug to fix messaging regarding this scenario.
I don't know why, but after connecting to a different network and running the commands below, the error was gone.
gcloud auth revoke
gcloud auth login
gcloud init