I am trying to debug a Docker image following the description in this article.
I have created a Dockerfile like this:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
RUN apt update && \
apt install procps -y && \
apt install unzip && \
curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY *.sln ./
COPY server/Business/NotificationModule.Business.csproj server/Business/
COPY server/Common/NotificationModule.Common.csproj server/Common/
COPY server/Data/NotificationModule.Data.csproj server/Data/
COPY server/DomainModel/NotificationModule.DomainModel.csproj server/DomainModel/
COPY server/Host/NotificationModule.Host.csproj server/Host/
RUN dotnet restore
COPY . .
WORKDIR "/src/server/Host/"
RUN dotnet build "NotificationModule.Host.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "NotificationModule.Host.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "NotificationModule.dll"]
and I have added an entry for debugging Docker containers to launch.json like this:
{
"name": ".NET Core Docker Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickRemoteProcess}",
"pipeTransport": {
"pipeProgram": "docker",
"pipeArgs": [ "exec", "-i", "objective_torvalds" ],
"debuggerPath": "/vsdbg/vsdbg",
"pipeCwd": "${workspaceRoot}",
"quoteArgs": false
}
},
It seems to work and I can attach the debugger to my process, but when the debugger hits a breakpoint it cannot find any .cs files and shows an empty .cs file instead.
I would like to ask what I have done wrong.
UPDATE:
I noticed that the debugger was looking for my .cs files under a src folder, which apparently doesn't exist in my working directory or in the image itself. So the question was why it was looking there.
OK, I've got it. That was my mistake: I was using the same Dockerfile we use for production to copy the .pdb files into the image, and those .pdb files had been built inside a Docker container in a src directory, so the debugger was looking there.
I just copied the files from my bin/debug into the Docker image and now it works perfectly. (Later I noticed this is also mentioned in the article.)
So here is the new Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
RUN apt update && \
apt install procps -y && \
apt install unzip && \
curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY bin/debug .
ENTRYPOINT ["dotnet", "NotificationModule.dll"]
And by the way, you should not add bin to your .dockerignore file, or the COPY above will fail.
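If rebuilding locally isn't an option, another approach (used by the attach configurations later in this thread) is to remap the path baked into the .pdb files via sourceFileMap in the launch configuration; here /src is the container build path from the original Dockerfile:

```json
"sourceFileMap": {
    "/src": "${workspaceRoot}"
}
```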
I had the same problem: the debugger worked (I could see variable values), but it couldn't find my .cs files.
It turned out that the default value of sourceFileMap in launch.json was incorrect; I found information on how to use it properly in the OmniSharp GitHub repository.
Here's how to fix the issue:
After I clicked the Create File button in the error popup, I was able to see where the debugger was looking for the source file. In my case the error message was:
Unable to write file '/out/src/MyApp/WebDriverFactory/WebDriverFactory.cs' (Unknown (FileSystemError): Error: EROFS: read-only file system, mkdir '/out')
I used the message above to fix the paths in sourceFileMap:
{
"version": "0.2.0",
"configurations": [
{
"name": "Docker .NET Core Attach (Preview)",
"type": "docker",
"request": "attach",
"platform": "netCore",
"sourceFileMap": {
"/out/src/": "${workspaceFolder}/src"
}
}
]
}
I used .NET 6 with Alpine images:
mcr.microsoft.com/dotnet/runtime:6.0-alpine3.14-amd64
It looks like the revalidate option doesn't work when using the standalone server of Next.js.
In getStaticProps I return this:
return {
props: {
page,
},
revalidate: 60,
};
I have the following NextJS config:
{
output: "standalone",
reactStrictMode: true,
swcMinify: true,
i18n: {
locales: ["default", "en", "nl", "fr"],
defaultLocale: "default",
localeDetection: false,
},
trailingSlash: true,
}
And I use the following docker file to create a container:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
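The package-manager detection in the deps stage of that Dockerfile can be sanity-checked outside Docker. A minimal sketch, using a scratch directory and a fake npm lockfile:

```shell
# Simulate the deps stage's lockfile detection in a scratch directory.
mkdir -p /tmp/lockcheck && cd /tmp/lockcheck
rm -f yarn.lock package-lock.json pnpm-lock.yaml
touch package-lock.json                      # pretend the project uses npm
if [ -f yarn.lock ]; then pm="yarn --frozen-lockfile"
elif [ -f package-lock.json ]; then pm="npm ci"
elif [ -f pnpm-lock.yaml ]; then pm="pnpm i --frozen-lockfile"
else echo "Lockfile not found." && exit 1
fi
echo "$pm"   # -> npm ci
```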
I have tried many things myself but nothing works, and on the internet I keep reading that you have to run on a Next.js server; but if I'm correct, that is exactly what running the standalone output does.
Digging into the issue further, I found some more errors. It turns out that the next/image component's default loader uses squoosh because it is quick to install and suitable for a development environment, but for a production environment using output: "standalone" you must install sharp.
So I ran:
yarn add sharp
After that the error no longer occurred, but I still didn't get an update on my content. Because I could not get this working and needed to continue with the project, I looked into On-Demand revalidation. Read more about it here:
https://nextjs.org/docs/basic-features/data-fetching/incremental-static-regeneration
So what I did was:
create an API endpoint in my Next.js solution to revalidate,
use the webhook from my CMS to call that API endpoint.
That did the trick for me. I can now just update something in my CMS: the webhook is triggered and I get a freshly rebuilt page with the changes!
In my eyes this is a better solution because:
it only revalidates the page when it is really needed;
after the change/revalidation, the next visitor directly sees the updated version.
So I'm happy that it didn't work: with a bit of extra effort I got a far better solution!
Hope this helps somebody.
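A minimal sketch of such a revalidation endpoint follows. The file path, query parameter names, and the secret's environment variable name are my assumptions, not the poster's code; res.revalidate is available in Next.js API routes since 12.1:

```javascript
// pages/api/revalidate.js (hypothetical path) -- called by the CMS webhook.
async function handler(req, res) {
  // Reject calls that don't carry the shared secret (assumed env var name).
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: "Invalid token" });
  }
  try {
    // Re-generate the static page at the given path on demand.
    await res.revalidate(req.query.path);
    return res.json({ revalidated: true });
  } catch (err) {
    // If revalidation throws, Next.js keeps serving the last good page.
    return res.status(500).send("Error revalidating");
  }
}

module.exports = handler; // or `export default handler` in an ESM project
```

The CMS webhook would then POST to e.g. /api/revalidate?secret=...&path=/some/page whenever content changes.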
I have several ASP.NET Core (6.0) WebApi projects that are dockerized using docker-compose. For local development, I use a docker-compose file which references Dockerfiles that build / publish the projects in Debug mode. Then in order to debug, I use the 'Docker .NET Core Attach (Preview)' launch configuration and select the corresponding docker container, which then prompts me about copying the .NET Core debugger into the container.
Until recently, this always worked and I could debug inside the container. Now suddenly, after accepting the prompt to copy the debugger into the container, I always get the following error:
Starting: "docker" exec -i web_roomservice /remote_debugger/vsdbg
--interpreter=vscode
Error from pipe program 'docker': FATAL ERROR: Failed to initialize dispatcher with error 80131534
The pipe program 'docker' exited unexpectedly with code 255.
I tried re-installing Docker Engine + docker-compose (with the latest version), re-installing VS Code and the 'Docker' and 'C#' extensions, migrating from ASP.NET Core 5.0 to 6.0 (since 5.0 is no longer supported), and obviously rebuilding my images multiple times, but nothing works and I can't find anything online. Any help with this would be greatly appreciated, since as of now I can't debug, which sucks.
These are my docker-compose, Debug-Dockerfile and launch config (for one project / service):
version: "3.7"
services:
roomservice:
image: web_roomservice
container_name: web_roomservice
build:
context: ./
dockerfile: Dockerfile.RoomService.Debug
expose:
- "5011"
volumes:
- /etc/localtime:/etc/localtime:ro
environment:
- ASPNETCORE_ENVIRONMENT=Development
user: "root:root"
logging:
driver: "json-file"
options:
max-size: "5m"
(There's more but I only included the section with this one service)
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
#EXPOSE 5011
ENV ASPNETCORE_URLS=http://+:5011
# Install netpbm which is used for .pgm to .png file conversion for map images
RUN apt-get -y update --silent
RUN apt-get -y install netpbm --silent
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["RoomService/RoomService.csproj", "./RoomService/"]
COPY ["EventBusRabbitMQ/EventBusRabbitMQ.csproj", "./EventBusRabbitMQ/"]
COPY ["Common/Common.csproj", "./Common/"]
RUN dotnet restore "RoomService/RoomService.csproj"
COPY RoomService ./RoomService
COPY EventBusRabbitMQ ./EventBusRabbitMQ
COPY Common ./Common
WORKDIR "/src/RoomService"
RUN dotnet build "RoomService.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "RoomService.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "RoomService.dll"]
(This Dockerfile is placed in the workspace folder (parent of the actual RoomService project folder) in order to include the Common project)
{
"version": "0.2.0",
"configurations": [
{
"name": "Docker .NET Core Attach (Preview)",
"type": "docker",
"request": "attach",
"platform": "netCore",
"sourceFileMap": {
"/src/RoomService": "${workspaceFolder}"
}
}
]
}
(This launch configuration is placed in the actual RoomService project folder's .vscode subfolder)
Had the same problem today. If you are on macOS, try deleting the ~/vsdbg directory.
David is right. I was having the same issue. I am running WSL2 (Ubuntu), and deleting the ~/.vsdbg directory fixed it for me.
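Deleting the cached debugger forces VS Code to re-download a matching vsdbg on the next attach. A one-liner covering both directory names mentioned above:

```shell
# Remove the cached cross-platform debugger (both names seen above);
# VS Code will download a fresh copy the next time you attach.
rm -rf "$HOME/vsdbg" "$HOME/.vsdbg"
```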
The configuration below works perfectly in .NET 6.
content of .vscode/launch.json file:
{
"version": "0.2.0",
"configurations": [
{
"name": ".NET Core Docker Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickRemoteProcess}",
"pipeTransport": {
"pipeCwd": "${workspaceRoot}",
"pipeProgram": "docker",
"pipeArgs": [ "exec", "-i", "your-container-name" ],
"debuggerPath": "/root/vsdbg/vsdbg",
"quoteArgs": false
},
"sourceFileMap": {
"path/in/container": "${workspaceRoot}/local/path"
}
}
]
}
I have also installed the .NET debugger in my Dockerfile as follows:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS base
...
FROM base AS debug
RUN apt-get update
RUN apt-get install -y procps
RUN apt-get install -y unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l ~/vsdbg
...
Full instruction is covered in this YouTube tutorial: Debugging .NET Core in Docker with VSCode
My filesystem:
Dockerfile
entrypoint.sh
package.json
/shared_volume/
Dockerfile
FROM node:8
# Create and define the node_modules's cache directory.
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache
COPY . .
RUN npm install
# Create and define the application's working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# Entrypoint copies node_modules and root files into /usr/src/app, shared with my local volume.
ENTRYPOINT ["/usr/src/cache/entrypoint.sh"]
package.json
{
"name": "test1",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "echo hello world start"
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"rimraf": "^3.0.1"
}
}
entrypoint.sh
#!/bin/bash
cp -r /usr/src/cache/. /usr/src/app/.
command line - bash script
If I run this code (note: using Windows 10 with cmder, hence %cd% instead of pwd):
docker run -it --rm -v %cd%/shared_volume:/app --privileged shared-volume-example bash
Error
standard_init_linux.go:211: exec user process caused "no such file or directory"
If I take out the reference to the entrypoint, the code works, so what is going on with the entrypoint? Any suggestions?
Thanks
OK, someone else posted an answer regarding line endings. I used Atom's line-ending-selector package, set the endings to LF, re-saved the file, and now it works.
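For reference, the CRLF symptom and fix can also be reproduced from the shell (the filename matches the question's entrypoint.sh; dos2unix works too if it is installed):

```shell
# A script saved with Windows (CRLF) line endings makes the kernel look for
# the interpreter "/bin/bash\r", which doesn't exist -> "no such file or directory".
printf '#!/bin/bash\r\ncp -r /usr/src/cache/. /usr/src/app/.\r\n' > entrypoint.sh
sed -i 's/\r$//' entrypoint.sh   # strip the trailing CR from every line
```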
I'm writing a simple app in Go using Postgres. I have this folder structure:
|--- Dockerfile
|--- api.go
|--- vendor/
     |--- database/
          |--- init.go
and here is my Dockerfile:
FROM golang:1.9
ARG app_env
ENV APP_ENV $app_env
COPY . .
WORKDIR /project
RUN go get ./vendor/database
RUN go get ./
RUN go build
CMD if [ ${APP_ENV} = production ]; \
then \
api; \
else \
api; \
fi
EXPOSE 8080
When I run docker-compose up, I get this error:
Error Message
Step 6/10 : RUN go get ./vendor/database
---> Running in 459740ba584c
can't load package: package ./vendor/database: cannot find package "./vendor/database" in:
/project/vendor/database
Service 'api' failed to build: The command '/bin/sh -c go get ./vendor/database' returned a non-zero code: 1
Where am I going wrong with the project structure?
You are copying the source into the default directory of the base image with COPY . .. Then you make /project the working directory with WORKDIR /project. So when you run RUN go get ./vendor/database, the command looks for /project/vendor/database, which does not exist. Switch the order of COPY and WORKDIR as follows:
WORKDIR /project
COPY . .
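Putting it together, the top of the corrected Dockerfile would read (the remaining lines are unchanged from the question):

```dockerfile
FROM golang:1.9
ARG app_env
ENV APP_ENV $app_env
# Set the working directory FIRST, so COPY places the source in /project.
WORKDIR /project
COPY . .
RUN go get ./vendor/database
```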
I am using semantic-ui, which requires semantic.json in the root folder with the setting autoInstall: true when using a Dockerfile.
If I want to use a custom theme, I need to rebuild semantic-ui, and its gulp build requires semantic.json to be located in root-folder/semantic/. That means before building the image, semantic.json should be located in the semantic folder and copied to the root folder; then, after npm install and ng serve, it should be removed from the root folder so gulp can run.
FROM node:8.11.3
WORKDIR /app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app
COPY package-lock.json /app
COPY semantic.json /app
RUN npm install -g npm@latest \
&& npm install -g n \
&& npm install -g @angular/cli \
&& npm install -g gulp \
&& npm install gulp \
&& npm install
# add app
COPY . /app
EXPOSE 4200
# start app
CMD ng serve --port 4200 --host 0.0.0.0
My question is: how can I use the Dockerfile to copy the semantic.json file from the semantic folder to the root folder, build, and then remove it from the root folder again?
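For completeness, the literal copy-then-remove approach the question describes could be sketched like this in the Dockerfile (paths are assumed from the question; the accepted fix below avoids the workaround entirely):

```dockerfile
# Copy semantic.json from the semantic folder into the image root so that
# `npm install` (with autoInstall: true) can find it...
COPY semantic/semantic.json /app/semantic.json
RUN npm install \
 && rm /app/semantic.json   # ...then remove it so the gulp build can run
```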
Instead of finding a work-around, I updated semantic.json to point the base path to the root folder, with the remaining paths prefixed with semantic/, as follows:
"base": "",
"paths": {
"source": {
"config": "semantic/src/theme.config",
"definitions": "semantic/src/definitions/",
"site": "semantic/src/site/",
"themes": "semantic/src/themes/"
},
"output": {
"packaged": "semantic/dist/",
"uncompressed": "semantic/dist/components/",
"compressed": "semantic/dist/components/",
"themes": "semantic/dist/themes/"
},
"clean": "semantic/dist/"
},
"permission": false,
"autoInstall": true,
"rtl": false,
"version": "2.3.3"