On my Windows Server 2022 machine, I recently installed TeamCity Professional 2022.10 (build 116751) using the Windows installer, and once I had it up and running I installed an agent through 'Install Agent' in the GUI, again using the Windows installer.
I then created my first project, for which I managed to run a successful build, including my tests. The next step was creating a Docker image from this build and pushing it to my repository. Here, however, I am facing issues: my installed agent is not compatible with that build, giving me the following incompatibility error:
Incompatible runner: Docker
Unmet requirements:
docker.version exists
docker.server.version exists
While it's clear to me that something is going wrong with the Docker version, I'm not sure what exactly, or how/where to fix it. Since both the agent and the TeamCity installation are running as Windows services (Windows Server 2022), I'm not sure whether Docker has to be installed in something running inside the agent service, or simply on my Windows Server installation. The latter is already the case: running docker info shows that it is installed.
I have tried to somehow connect to my agent to install Docker there, using its hostname through RDP, which does prompt me for a username and password, but I have no idea which combination to use there. I have tried the credentials of the account running the process, but none of them seem to work. Nowhere in the installer did I have to pick any credentials, so I am not sure how to connect to the agent at all, or if I even can/must connect to it to install Docker on it.
I also found some logging on the agent:
[2022-11-05 17:07:49,729] INFO - rains.buildServer.AGENT.DOCKER - Failed to parse version: Docker version master-dockerproject-2022-03-26, build dd7397342a
[2022-11-05 17:07:49,729] INFO - rains.buildServer.AGENT.DOCKER - Docker client is not available. Check whether it has been installed and PATH environment variable contains path to it.
[2022-11-05 17:07:49,777] INFO - Server.powershell.agent.DETECT - Found through registry: PowerShell Desktop Edition v5.1.20348.1 64-bit(C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe)
[2022-11-05 17:07:49,778] INFO - Server.powershell.agent.DETECT - Detecting PowerShell using CommandLinePowerShellDetector
[2022-11-05 17:07:50,125] INFO - rains.buildServer.AGENT.DOCKER - Docker-compose is not available. Check whether it has been installed and PATH environment variable contains path to it.
In the parameters of my agent I can find the PATH parameter, which includes 'C:\Program Files\Docker;'. This makes me think it is indeed the Docker installation of my Windows Server that matters, but then I fail to see what exactly is going wrong.
Since the agent was installed as a service, it uses the Docker installation of my Windows Server. I wanted to reinstall Docker to see what was going wrong, and I noticed that I couldn't uninstall it through, for example, Control Panel: Windows seemed to have no idea that it was installed, even though docker info reported both a client and a server running.
After hunting down all the 'hidden' Docker files of the installation and reinstalling it on the host machine, these warnings went away.
I'm still not sure whether it's possible to connect to the build agent, but since it seems to use the resources of the host machine, that doesn't appear to be necessary anyway.
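For anyone hitting the same incompatibility: since the agent detects Docker through the PATH of the service account it runs under, a quick sanity check is to confirm the CLI resolves and then restart the agent service so it re-runs its detection. This is only a sketch; the service name TCBuildAgent is the usual default for an agent installed via the Windows installer, but check services.msc for the actual name on your machine.
where.exe docker                      # should print C:\Program Files\Docker\docker.exe (or similar)
docker version                        # should show both client and server sections
Restart-Service TCBuildAgent          # assumed service name; the agent re-detects Docker on startup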
My environment:
macOS (Apple M1 chip)
VS Code version 1.66.2 (arm64)
Locally installed Docker version: 20.10.22
I am having an issue where Docker is not working in VS Code.
I have already installed Docker locally, but when I try to connect to Docker in VS Code, it repeatedly asks me to install the Docker extension (even though I already have Docker). And if I reinstall by following VS Code's suggestion, the Docker installation gets broken (it is replaced with the Intel-chip build of Docker).
Does anybody know what's wrong?
The Docker extension for VS Code has nothing to do with the Docker engine itself. It is an additional layer of tools and commands on top of the installed Docker: it provides IntelliSense for editing Docker-related files, lets you run Docker commands from the F1 drop-down, and so on. You should be able to do all the required tasks even without the extension, e.g. from the Terminal in VS Code, but for that the path to the Docker CLI (command line interface) must be added to the PATH environment variable.
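If the Docker CLI is not found in the VS Code terminal on macOS, it is usually enough to put Docker Desktop's bundled CLI directory on PATH. A minimal sketch for zsh, assuming the default Docker Desktop install location:
which docker    # empty output means the CLI is not on PATH
echo 'export PATH="$PATH:/Applications/Docker.app/Contents/Resources/bin"' >> ~/.zshrc
source ~/.zshrc
docker version  # prints client info, and server info too once the engine is running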
If you are getting a 'failed to connect' error, then maybe the Docker engine is not running. Please refer to https://docs.docker.com/desktop/install/mac-install/ and https://docs.docker.com/desktop/troubleshoot/overview/ for how to check whether the engine is running and how to troubleshoot issues.
If that doesn't help, please provide the specific error and the steps that led to it, and we'll try to figure it out.
I set up a Windows GitLab runner that's supposed to download a Docker image from our Container Registry and then run a build script in the pipeline. Unfortunately the Docker container never launches due to the following error:
Running with gitlab-runner 15.1.0 (76984217)
on WindowsDockerRunner wZMWQZYi
Resolving secrets
Preparing the "docker-windows" executor
Using Docker executor with image mcr.microsoft.com/windows/servercore:ltsc2019 ...
Pulling docker image mcr.microsoft.com/windows/servercore:ltsc2019 ...
Using docker image sha256:e6b07227af5ca9303c2112b574f6f27f38135bbf9df29d829142410221967401 for mcr.microsoft.com/windows/servercore:ltsc2019 with digest mcr.microsoft.com/windows/servercore#sha256:26c6c296a4737ba478fe3c3e531b098f89b5562c40b416ba6fb8177ac462d1af ...
Preparing environment
Running on RUNNER-WZMWQZYI via
runner2...
ERROR: Job failed (system failure): prepare environment: Error response from daemon: invalid condition: "not-running". Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
The error message doesn't clearly state what the cause of the problem is, and the documentation it references doesn't mention anything about "condition". Based on the link pointing to shell profiles, I suspect it has something to do with the shell that's being run, but when I run the Docker container locally it boots into PowerShell just fine.
Does anyone know how to solve this?
I came across this issue after installing Docker Engine using the Windows Server install script, which fetches docker.exe and dockerd.exe from https://master.dockerproject.org. Those builds were last updated in March 2022. I found that gitlab-runner 14.9 and earlier (released prior to March 2022) work fine with this version, but 14.10 (released 2022-04-19) does not, and neither does any newer version.
Installing Docker Desktop resolves this, as it provides the latest version. However, using Docker Desktop introduces licensing issues. An alternative is to manually install Docker Engine / update the version downloaded by the Microsoft script.
Docker Engine builds from the Moby GitHub project are available for download from https://download.docker.com/win/static/stable/x86_64/. Downloading the latest version from there and replacing the Docker executables in C:\Windows\System32 fixes the problem and works with the latest gitlab-runner.
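As a rough PowerShell sketch of that manual update (the version number is an assumption; check the index page for the current one, and note that the Microsoft script registers the engine as the 'docker' Windows service):
Stop-Service docker
Invoke-WebRequest "https://download.docker.com/win/static/stable/x86_64/docker-20.10.21.zip" -OutFile "$env:TEMP\docker.zip"
Expand-Archive "$env:TEMP\docker.zip" -DestinationPath "$env:TEMP\docker-static" -Force
# The zip contains a 'docker' folder with docker.exe and dockerd.exe
Copy-Item "$env:TEMP\docker-static\docker\docker.exe","$env:TEMP\docker-static\docker\dockerd.exe" -Destination "C:\Windows\System32" -Force
Start-Service docker
docker version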
An alternative is to use the docker-engine chocolatey package (which incidentally I maintain) which provides installation scripting for the above stable builds:
choco install docker-engine
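After installation (assuming the package registers dockerd as the 'docker' Windows service, which its install scripts are meant to do), the engine can be started and verified with:
Start-Service docker
docker version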
There is also an open issue with the Windows-Containers team about moving off the (out-of-date) nightlies: https://github.com/microsoft/Windows-Containers/issues/256. That would provide a stable Docker build through the Microsoft-recommended installation method.
I was finally able to solve this issue. We had Docker Engine installed on our GitLab runner, but that doesn't seem to be sufficient for GitLab CI/CD. After installing Docker Desktop on the runner, the issue disappeared and we were able to run the pipeline.
After some trial and error I got it up and running.
I have another server running gitlab-runner and Docker without any issues (no Docker Desktop installed, which is not allowed because of licensing).
The server I'm trying to set up right now is a 'redundancy' build server.
So to find out what my problem was, I started switching things from one build server to the other. Currently, it appears that simply downgrading to gitlab-runner v13.4.0 was enough.
I did re-register the runner, since GitLab stated that the v15.x.x version was using executor "unknown".
Not sure what is going on there, but at least I can continue building now.
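For reference, a rough sketch of such a downgrade plus re-registration on Windows; the download URL follows GitLab's usual binary layout, and the GitLab URL, token, and image are placeholders, not values from the original post:
.\gitlab-runner.exe stop
Invoke-WebRequest "https://gitlab-runner-downloads.s3.amazonaws.com/v13.4.0/binaries/gitlab-runner-windows-amd64.exe" -OutFile .\gitlab-runner.exe
.\gitlab-runner.exe register --non-interactive --url "https://gitlab.example.com" --registration-token "TOKEN" --executor "docker-windows" --docker-image "mcr.microsoft.com/windows/servercore:ltsc2019" --description "WindowsDockerRunner"
.\gitlab-runner.exe start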
I am trying to develop in a remote container.
I run VS Code on my local Windows machine.
I have a Linux machine which runs Docker and a bunch of containers.
I have the "Remote - Containers" and "Remote - SSH" extensions installed in VS Code.
I can connect to my Linux machine in VS Code and I can see the running containers.
I can right click on a container and choose "Attach Shell". This works fine:
When I right click on a container and choose "Attach Visual Studio Code" I get an error:
UPDATE
The above error was raised because (for some reason?) Docker must also be running locally on Windows, even though we are working entirely on a remote machine. I've installed and run Docker locally.
Now when I right click on a running container, I get a different error:
Of course the containers are running -- I see them.
How can I Attach Visual Studio Code to a running remote container successfully?
This may not be a real answer but it's too much for a comment.
I believe you have a local machine and Docker on a remote server.
The first thing you have to do is install Docker on your local machine and configure it so that it looks for the Docker host on your remote server, as sketched below.
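One way to do that is a Docker context that talks to the remote engine over SSH; the host and user below are placeholders, and key-based SSH authentication (e.g. via ssh-agent) is assumed:
docker context create my-remote --docker "host=ssh://user@remote-linux-host"
docker context use my-remote
docker ps    # now lists the containers running on the remote machine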
Then you can create a .devcontainer.json on your machine. If you have the extension installed, VS Code will offer to open this as a container environment. Since your Docker host sits on the remote machine, this will now happen on your server instead of your local machine.
When I did the setup, I followed, amongst other things, this guide; the SSH agent in particular was required to get a remote Docker host working: https://code.visualstudio.com/docs/remote/containers-advanced#_a-basic-remote-example
Here is an example .devcontainer file of mine.
Now back to your initial question: I don't think you will be able to use the remote container extension on a container that wasn't started as a dev container. This is because VS Code installs a bunch of tooling in the container when it is first set up, similar to the SSH extension. I may be wrong on this, so take it with a grain of salt.
It may also be worth noting that once you connect to your server via SSH and have the regular Docker extension (not the remote container extension) installed on the remote, you will see your Docker images listed there. But that does not mean you will be able to connect like that from a local to a remote container; for that you need to configure a Docker remote host.
I also faced a similar issue, and after doing some research I found that the problem was with my installation.
In my case, I hit this issue when I had installed VS Code through snap on Ubuntu.
Maybe try uninstalling VS Code and reinstalling it.
It should work if Docker is installed properly.
I took an older MacBook back into use. It previously had boot2docker installed, from before the native Docker for Mac existed. That might be the root cause of my issue.
I've installed the new Docker for Mac, but when I run docker-compose I get the following error:
docker.errors.TLSParameterError: Path to a certificate and key files must be provided through the client_config param. TLS configurations should map the Docker CLI client configurations. See https://docs.docker.com/engine/articles/https/ for API details.
I don't want to install a docker-machine with VirtualBox or anything like that. I just want to run it natively, like a fresh Docker for Mac installation. All the solutions I've found so far require me to use docker-machine.
Fixed it by unsetting all legacy docker-machine environment variables so that it uses the correct Docker commands:
unset ${!DOCKER_*}
I've found the solution on the docker troubleshooting page over here.
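To check whether any of those legacy variables are still being set, and where they come from, something like this helps (the profile files listed are just the usual suspects):
env | grep '^DOCKER_'                                                           # variables currently set in this shell
grep -n "docker-machine env" ~/.bash_profile ~/.zshrc ~/.profile 2>/dev/null    # old 'eval $(docker-machine env ...)' lines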
I installed Jenkins' Gradle plugin and used the automatic restart option via the Jenkins web interface. Jenkins seemed to hang on the "restarting..." page, so I finally tried to manually restart the Jenkins service on the server (64-bit Debian 7) using service jenkins restart.
Now, Jenkins is no longer running at all (verified with ps -ef | grep -i [J]enkins and service jenkins status), and when I try service jenkins [re]start, I see an [ ok ] message but nothing else seems to happen. I've deleted /var/log/jenkins/jenkins.log, and each time I try a service start (or restart), the log file reappears, but it's blank (ls -lA shows that the file was recently made, but cat produces no output). I also tried rebooting the server, with no effect. I finally deleted the Gradle folders under /var/lib/jenkins/plugins, which also did not appear to make a difference.
How do I even begin to approach this problem? Should I just re-install Jenkins?
EDIT: System info:
> uname -a
Linux AUC-Workstation1 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux
According to dpkg -l, I'm using Debian's jenkins package, version 1.617.
EDIT 2: I'm actually using the jenkins package provided directly by Jenkins, as per the instructions here.
I just had a problem where multiple Jenkins plugins were breaking Jenkins startup (after an upgrade) and here is the procedure I followed to resolve the issue, which might work for other plugin startup issues.
I'm working on an Ubuntu server, but I expect that this would work for Debian if it's going to work at all - I encourage others to adjust the procedure (a shell sketch of the main steps follows the note at the end of this list):
logged into the server and switched to the jenkins user (sudo su jenkins in my case)
went to the main jenkins directory
renamed plugins to plugins.problems_YYYYMMDD
previously, I attempted to disable the plugins, but this did not work for me (system still would not start)
created an empty directory plugins
restarted jenkins (sudo service jenkins restart)
In my case, this started just fine
iteratively followed the following procedure to add plugins back in
copied 1 or more plugins from plugins.problems_YYYYMMDD/ to plugins/
restarted jenkins
went to the plugin center and installed updates as available
sometimes I needed to install updates in a particular order due to dependencies
evaluated results in 'Manage Old Data'
I think I still have some manual updates of the old data ahead of me.
Note: if you know which plugins are likely the problem, then it is easier to just disable or temporarily (re)move them rather than (re)moving all of the plugins!
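A rough shell sketch of the rename-and-restart part of the procedure, assuming the standard Debian/Ubuntu layout with JENKINS_HOME at /var/lib/jenkins:
sudo su jenkins                              # keep file ownership correct
cd /var/lib/jenkins
mv plugins plugins.problems_$(date +%Y%m%d)  # move the plugins aside
mkdir plugins                                # start with an empty plugins directory
exit
sudo service jenkins restart                 # Jenkins should now come up without plugins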
I never did figure out the initial problem, but I did get Jenkins working again, sort of.
I uninstalled Jenkins (using apt-get purge) and then re-installed it. This time it failed to start because it needed Java 7, but I apparently only had Java 6 installed (this surprised me, because I thought I had previously configured Jenkins to use Java 7 on that machine). So I installed openjdk-7-jdk and openjdk-7-jre, set JAVA and JAVA_HOME appropriately in the Jenkins config file, and started the service again. This allowed Jenkins to start.
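For reference, a rough sketch of those steps on Debian (the openjdk-7 path is an assumption, and the sed assumes /etc/default/jenkins already contains a JAVA= line; check update-alternatives --list java for the actual Java location, and set JAVA_HOME in the same file if needed):
sudo apt-get purge jenkins                   # remove Jenkins including its configuration
sudo apt-get install jenkins
sudo apt-get install openjdk-7-jdk openjdk-7-jre
sudo sed -i 's|^JAVA=.*|JAVA=/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java|' /etc/default/jenkins
sudo service jenkins start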