I'm a Docker newbie, using Docker on Mac. I'd like to debug docker-compose because I can't understand why it's doing what it's doing.
I figured I'd just find the Python files (in my case, network.py in compose), edit them to run my debugging logic, and then run docker-compose.
The file I found and edited is /usr/local/Cellar/docker-compose/1.24.1/libexec/lib/python3.7/site-packages/compose/network.py, but even though I edited it, the behavior doesn't change. I even put 1 / 0 at the top of that file and docker-compose still works fine, so I conclude it isn't actually loading that file.
Where are the Python files that docker-compose actually uses?
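One likely explanation worth ruling out: on Docker Desktop for Mac, the docker-compose on the PATH may be the binary bundled with Docker Desktop, not the Homebrew install that was edited. A quick sketch for checking (the libexec interpreter path is an assumption based on how Homebrew lays out the formula):
# Which docker-compose binary is actually first on the PATH?
which docker-compose
# Ask the Homebrew-installed interpreter which network.py it loads:
/usr/local/Cellar/docker-compose/1.24.1/libexec/bin/python \
  -c "import compose.network; print(compose.network.__file__)"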
I currently have a folder containing some .dll files, .bin files, and some .exe files. The main .exe that I will be executing only works on Windows, and I am not entirely sure what all its dependencies are. My goal is to package all the files in the folder into a Docker container so I can integrate it into the rest of my pipeline. The main .exe is a command-line tool that is called once with some arguments and left to run.
I have tried using Windows Server Core as the container image, and it works. However, this image is too big for my needs. I have tried Nano Server, but when I try to run the executable, nothing is printed to the command line and the program does not run. In that scenario, if I type:
C:\Bin\x64>echo %ERRORLEVEL%
I get the following output:
-1073741515
That exit code is 0xC0000135 (STATUS_DLL_NOT_FOUND), meaning I am missing some DLL dependencies.
So, I'm wondering if there is an alternate solution to packaging this folder since windows server core is too big.
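(As an aside, one way to enumerate the direct DLL dependencies of the executable, assuming the Visual Studio build tools are available; main.exe is a placeholder for the actual tool:)
REM Sketch: dumpbin ships with the Visual Studio build tools.
dumpbin /dependents C:\Bin\x64\main.exe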
Most likely you'll have to stick with the Server Core image. The main thing is that these images serve different purposes: Nano Server is intended for new applications developed against the reduced Nano Server API, while Server Core is the image aimed at existing apps, and its fuller API surface is what makes it larger than what one would expect from a container.
Keep in mind that it's still better than a full VM. :)
I blogged about this here: https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785
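If you do stay on Server Core, a minimal Dockerfile for a single command-line tool could look like the sketch below (the base-image tag and main.exe are placeholders; pick the Server Core tag that matches your host build):
# Sketch: copy the folder in and run the tool; adjust names and tag.
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY Bin/x64/ C:/Bin/x64/
WORKDIR C:/Bin/x64
ENTRYPOINT ["C:\\Bin\\x64\\main.exe"]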
I am a developer who uses Ubuntu 20.04 LTS regularly for development. I never install packages like Node, PHP, or Python in the OS itself; I use Docker for that instead. VS Code is my editor, and the Remote - Containers extension helps me develop and debug inside the Docker container.
Right now, I am in the process of moving my development to a Windows environment and I wanted to follow a similar workflow there too. Unfortunately, I am facing a few issues, like file changes not being detected (when running npm serve in Angular and React projects).
https://github.com/microsoft/WSL/issues/4739
https://www.reddit.com/r/bashonubuntuonwindows/comments/c48yej/wsl_2_react_not_reloading_with_file_changes/
I have tried different methods to solve the issue, like:
using WSL2, then Docker inside that, and serving from the container
using just Docker and serving the code from inside the container
Regardless of the method, the file changes are not detected inside Docker.
Trust me, I have gone through many bizarre-sounding fixes (inotify, increasing the watcher limits, etc.). Nothing helped.
Is there a developer out there following a similar practice in a Windows environment? (docker + windows)
Any help is highly appreciated.
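One commonly suggested fallback, since file-change events do not propagate across the Windows/Linux file-system boundary, is to make the dev server poll for changes instead of relying on inotify. A sketch, assuming Create React App or the Angular CLI:
# Create React App: force the chokidar watcher to poll
CHOKIDAR_USEPOLLING=true npm start
# Angular CLI: poll for changes every 2 seconds
ng serve --poll=2000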
I suggest moving the files to the WSL2 file system, not the Windows one.
WSL2 'sees' the Windows file system through a mount at /mnt/c.
Move out of it, e.g. to your home directory (cd ~), and I think your files will be watched normally.
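For example (the repository URL and project name are placeholders):
# Inside the WSL2 shell: keep the project in the Linux file system,
# not under /mnt/c, so inotify-based watchers work natively.
cd ~
git clone https://github.com/your-org/your-app.git
cd your-app
npm install
npm start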
I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro has no remote repo; all installations are done from the CD-ROM and a separate disk with packages. I want to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first is to mount the ISO and find filesystem.squashfs, but only modern distros work that way; my distro doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dist directory available in it, and my distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. The approach below describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a snapshot of the Linux distribution. Some binaries can be executed through Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described here https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration, with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option
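For WSL2 specifically, that means adding the kernel command line to %UserProfile%\.wslconfig, roughly like this, and then restarting WSL with wsl --shutdown:
[wsl2]
kernelCommandLine = vsyscall=emulate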
I seriously doubt you will succeed with that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. The lightweight virtualization comes from the containers running directly on the host's Linux kernel, which means the kernel is the same inside and outside the container.
If you now try to install some old distro inside a container, you may end up with an incompatible combination: the old user space expects an old kernel, fixing that may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick with the old distro, but seriously, I believe you are better off with real virtualization.
This is my first foray into .NET Core and App Engine, so please forgive me if I sound uninformed.
We have a .NET Core application that we're trying to get published to a GCP App Engine (obviously). When I run dotnet publish -c Release, it builds just fine without any errors. When I test the program locally, it runs fine and I'm able to access it. However, whenever I try to deploy it to GCP, I get the following error:
Updating service [default] (this may take several minutes)...
.................................................................................................................................................failed.
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
Error:
An assembly specified in the application dependencies manifest (ApplicationName.deps.json) was not found:
package: 'Microsoft.AspNetCore.Mvc.Abstractions', version: '2.0.2'
path: 'lib/netstandard2.0/Microsoft.AspNetCore.Mvc.Abstractions.dll'
This assembly was expected to be in the local runtime store as the application was published using the following target manifest files:
aspnetcore-store-2.0.5.xml
Failed to deploy project WebApiDotNetCore to App Engine Flex.
We tried removing it from the dependencies JSON, and that just ended up breaking everything, so it is indeed required. It is installed in the project via NuGet, so it should be included with dotnet restore. I've looked around, and some sources seem to think the problem is the .NET Core SDK installation, but I've tried it on three computers and always get the same result.
Lastly, I should say this happens when I try to deploy through the command line as well as directly through Visual Studio with the GCP SDK.
Has anyone experienced this error, or something similar? Any advice or guidance is very much appreciated.
Thanks!
-BT
OP REVISION
As an update, I was able to get this resolved, aside from the fact that I now get a 502 error when I try to load the application. Here are the steps I took, for anyone else looking for what to do:
Pre-reqs: Docker for Windows and the Google Cloud SDK installed and running. Getting Docker for Windows running turned out to be a pain: many, many restarts and reinstallations.
Open the solution and ensure that the startup project is set correctly.
Right click the startup Project, and select Add > Docker Support.
Select Linux in the popup window and allow the files to be created.
When complete, the Dockerfile should appear in the preview window. Do the following (a consolidated sketch follows after this list):
For me the first line read FROM microsoft/aspnetcore:2.0 AS base. Change this to FROM microsoft/aspnetcore-build:2.0 AS base.
Additionally, check that the last line has the correct .dll name. Docker for Windows will put in whatever the project name is rather than the assembly name, so for me the final .dll names were different from the project name.
Lastly, if your project has any dependencies that are required to run but not to build, you'll need to add them manually. For me, we have a couple of XML files that need to be in the app folder, so I had to add COPY *.xml /app/ and put those files in the same folder as the solution file.
If there's anything else you need to do to the Dockerfile, I highly recommend this page. It's a how-to on all Dockerfile commands, written in ENGLISH! (That was my biggest problem with all of this: I have little experience with Linux and even less with Docker, and everything read like Greek to me.)
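Putting those edits together, the relevant lines of my Dockerfile ended up looking roughly like this (the assembly name is a placeholder, and the generated build stages in between stay as Visual Studio created them):
FROM microsoft/aspnetcore-build:2.0 AS base
# ... generated stages unchanged ...
COPY *.xml /app/
ENTRYPOINT ["dotnet", "YourActualAssembly.dll"]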
Create an app.yaml file. I just used the standard:
runtime: custom
env: flex
Copy the Dockerfile found in the startup project's folder into the folder with the solution.
Initialize gcloud to the right project, then navigate to the solution folder. Then type gcloud app deploy app.yaml and follow the on-screen guide (the full sequence is sketched below).
For me it takes about 15 minutes to deploy to GCP, so depending on the complexity of your project it may take more or less time, though this one is rather complex.
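The full sequence is roughly (the project ID is a placeholder):
gcloud config set project your-project-id
cd path\to\solution
gcloud app deploy app.yaml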
Now I'm trying to figure out my 502 error... I've tried what seems like everything: changing the listening port in the application, exposing the listening port in the Dockerfile, trying to get GCP to open that port, and trying half a dozen different ports. It's slow going, since it's such a chore to deploy each time.
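One thing worth checking for the 502: App Engine Flex routes incoming requests to port 8080 in the container by default, so the app has to listen there. A sketch of the Dockerfile lines that make an ASP.NET Core app do so (assuming the default Flex setup):
# Listen on 8080, the port App Engine Flex forwards traffic to by default
ENV ASPNETCORE_URLS=http://0.0.0.0:8080
EXPOSE 8080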
Hope this helps anyone that was like me a couple weeks ago and had never even heard of Docker!
Which version of .NET Core is this? Also, have you tried to run in Cloud Shell? Maybe that will provide more clues on what might be wrong.
It looks like you don't have the Microsoft.AspNetCore.Mvc.Abstractions library installed. Using the .NET CLI, run the following command:
dotnet add package Microsoft.AspNetCore.Mvc.Abstractions --version 2.0.2
After that, to ensure the library is included, run the following:
dotnet restore
dotnet build
Try running it locally (it should work), and then use the dotnet publish -c Release command again.
I have searched the history a little but failed to find a good answer, so I am asking my question here. If there is a good answer already, please redirect me to it. Thanks.
The question: my company's new-hire doc lists a bunch of software to install to set up the development environment, and it usually takes a new hire 1 or 2 days to get everything ready on a new Mac. We want to shorten that process, and the first thing I thought of is Docker.
I read through Docker's user guide and followed some blogs on how to set up a dev environment using Docker, but I am still a little confused about whether Docker applies to our setting. Here are the details of our requirements:
We need to install a bunch of software (much of it customized binaries). Right now we distribute the source code, and a new hire needs to build from the source, install it, and set the environment to include the binary in the PATH. I am wondering whether Docker allows us to install customized binaries into its container?
The source code should not stay in the container; it is still checked out on one's local machine using git. How, then, can I rely on the Docker container's environment to build my software? From the little searching I've done, it seems you need to mount your folder into the container and then shell into the container to build. Is that how it works?
We usually develop on a Mac. Does Docker also support Mac containers, or does it just allow you to run Linux containers using Boot2Docker?
Thank you so much in advance for your help.
Some answers :)
First, I think it's a really good idea to use Docker to standardise the development configuration (software, custom packages, env variables, ...).
With Docker, you can get your customised binaries from the host; that's not a problem. With a RUN instruction in your Dockerfile, you can use bash to install them and add them to your PATH. You can also write a shell script that installs all your stuff and run that script when you build your image (see the sketch after this list).
Your code will live on the host, and you can mount a host folder into your Docker container with the -v option. Ex: docker run -v /home/user/code:/tmp/code your_image. I'll detail below how the developer will use your Docker image.
Yep, you have to use Boot2Docker; it works well.
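A minimal sketch of such a development image (install-tools.sh and the paths are placeholders for your customized binaries):
# Sketch: bake the custom tools into the image at build time.
FROM ubuntu:14.04
COPY install-tools.sh /tmp/install-tools.sh
RUN bash /tmp/install-tools.sh
# make the custom binaries available on the PATH
ENV PATH /opt/yourtools/bin:$PATH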
Once your development image is ready, you can publish it on the official Docker registry (or host a private registry on your network).
Next, the developer will launch the following Docker command:
docker run --rm -ti -v /home/user/code:/tmp/code your_build_image /bin/bash
This will launch a bash terminal in your Docker image, and the developer will be able to compile the code. Ex: cd /tmp/code, then mvn clean install.
Please have a look to this article to learn about volumes: http://jam.sg/blog/mongodb-docker-part-2/
And this one about Dockerfile: https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
You can also find a lot of Dockerfiles on github (search Dockerfile).
If the goal is to speed up the time it takes to get a Mac setup and usable in your environment, you might want to look at Boxen.
From the "About" section:
"Boxen is your team's IT robot. It's a dangerously opinionated framework that automates every piece of your development environment. GitHub, Inc. wrote the first version of Boxen (imaginatively called “The Setup”) to help employees start shipping on day one."