We need to access a shared resource (a file share) in Active Directory from a Docker container running a .NET Core API. The API is started with (dotnet api.dll).
The current user in the Docker container is ContainerAdministrator.
We are using Windows Server 2016.
Finally made it work.
Added a PowerShell script that runs "net use" first and the dotnet command after.
Used CMD in the Dockerfile to start the PowerShell script.
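In case it helps someone, the wiring looked roughly like this (the share path, credentials, and file locations below are placeholders, not our real values):

```powershell
# entrypoint.ps1 -- map the share first, then start the API.
# \\fileserver\share, MYDOMAIN\svc_api and the password are placeholders.
net use Z: \\fileserver\share P@ssw0rd /user:MYDOMAIN\svc_api
dotnet C:\app\api.dll
```

```dockerfile
# In the Dockerfile, start the container through the script:
CMD ["powershell", "-File", "C:\\app\\entrypoint.ps1"]
```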
Thanks for the answers.
You need to set up a Windows-based authentication infrastructure first; it's not a simple Dockerfile change. The details are covered in this series of blog articles, for both .NET Core and the full .NET Framework:
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
For the development of my Python project I have set up a Remote Development Container. The project uses, for example, MariaDB and RabbitMQ. Until recently I built and started the containers for those services outside of VSCode. A few days ago I reworked the project using Docker Compose so that I can manage the other containers with the remarkable Docker extension (Id: ms-azuretools.vscode-docker, Version: 1.22.0). That is working fine, apart from one issue I cannot figure out:
I can start all containers using compose up; however, the Python project's Remote Development Container does not stay up. Currently I open the project folder in a second VSCode window and use the "Reopen in Container" command.
It would be nice if the Python project container stayed up so I could just use the "Attach Visual Studio Code" command from the Docker extension's Containers menu.
Is there something I can add to .devcontainer.json or some other configuration file to realize this scenario?
Any help is much appreciated!
If it helps I can post the docker-compose.yml, Dockerfiles, or the .devcontainer.json; please let me know what is required.
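For reference, a minimal sketch of the kind of compose-based .devcontainer.json I mean (service and folder names are made up):

```jsonc
// .devcontainer/devcontainer.json (sketch; names are placeholders)
{
  "name": "python-project",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",                 // the Compose service to develop in
  "workspaceFolder": "/workspace"
}
```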
I just started writing Dockerfiles and am trying to start a website using Docker, but every time I run the file I can't access the website.
Dockerfile:
Docker log:
This is just a warning that the method is deprecated; it does not affect the running of the application and is not the cause of the problem.
I actually figured it out: when running the docker run command, adding the
--network="host" flag let me connect to the website.
I am currently trying to set up GitHub Actions using a Windows VM as the build server (self-hosted runner). I have installed Docker on Windows.
I am able to connect to our Harbor registry from the Windows VM.
Example screenshot from PowerShell: able to successfully run the docker command on the Windows VM.
But when I try to execute the same command from GitHub Actions, I get an access denied error.
Please refer to the following screenshot from GitHub Actions:
Access denied error when running the docker command from GitHub Actions.
Can someone help me here?
I had to modify the startup parameters of the Docker Engine service to point to the user group installed with the GitHub Actions runner. I also had to make this modification using the Registry Editor, as the services GUI would not accept a parameter change; more on this here: https://serverfault.com/questions/507561/in-a-windows-service-will-the-start-parameters-be-preserved-if-the-start-is-of
Example: "C:\Program Files\Docker\dockerd.exe" --run-service -G GITHUB_ActionsRunner_XXXXXX
Suppose I create a Docker image for one of my applications and publish it on Docker Hub.
Many users download this image and run the application in their containers, and the application writes logs to a folder.
Now, as a developer, how can I see those application logs from my machine when the container is on a remote computer I don't have access to?
If it were a virtual machine, I could ssh to it, go to that folder, and see the logs for that particular application; how is this possible with Docker?
I am not talking about Docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case in Docker?
I don't have any experience working with Docker.
docker exec can be used to run commands inside a Docker container. But in your case the containers are running on a remote machine, not on your local machine. So you have two options:
1. ssh into the remote machine and then use the docker exec command to check the logs (see the sketch below).
2. ssh directly into the Docker container.
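For option 1, that could look like this (host name, container name, and log path are hypothetical):

```bash
# 1. SSH into the machine that runs the container
ssh user@remote-host

# 2. Read the application's log file inside the container
docker exec my-app-container cat /app/logs/app.log

# or follow it live
docker exec my-app-container tail -f /app/logs/app.log
```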
But in both scenarios, you will need the end users to grant you SSH access to the remote machines.
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
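Concretely, the operator's session might look like this (the log file name is an example):

```bash
mkdir -p logs
docker run -v "$PWD/logs:/app/logs" you/yourimage
tail -f logs/app.log   # the container's log file, read directly on the host
```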
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs the software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as for anything else: when a user files a bug report, make sure they provide a sufficient reproduction, log files, and the relevant configuration, and reproduce the issue yourself locally.
On my Windows Server 2016, I am trying to figure out the run command syntax to run a Docker image as a user in my LDAP. I read this article, but I am not following it very well (different environments).
Perhaps I am misunderstanding the concept altogether, but in the end I need to run the container as a specific user in our Active Directory.
Any links to well-documented run --user examples would be appreciated.
One of the things that is confusing is trying to figure out the user ID and such.
The answer depends on the use case, but maybe gMSA (group Managed Service Account) authentication would help? Basically, with gMSA authentication you add the host OS to an AD domain, and the containers running on it can share its privileges to use things like network drives. That way, you don't need to pass credentials every time you access them.
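As a rough sketch of what that looks like in practice (WebApp01 is a placeholder gMSA name; New-CredentialSpec comes from Microsoft's CredentialSpec PowerShell module):

```powershell
# On a domain-joined container host, generate a credential spec for the gMSA
Install-Module CredentialSpec
New-CredentialSpec -AccountName WebApp01

# Run the container with that credential spec; processes running as
# Network Service or Local System inside it act as the gMSA on the network.
docker run --security-opt "credentialspec=file://WebApp01.json" -d my-windows-image
```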
The MS team has a good write-up on it here:
Active Directory Service Accounts for Windows Containers
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Also, artisticcheese has a fantastic walkthrough:
Enabling integrated Windows Authentication in windows docker container
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
Hope this helps.