Continuous integration for Docker on TFS - docker

I need some help with Docker configuration on TFS. I've got some experience, but not much. I have configured classic CI for .NET projects a few times, using these steps:
-get sources
-run tests
-build
-copy files to server
Recently I've started to use Docker and I would like to automate this process, because right now I have to copy the files manually to the remote machine and then run these commands:
dotnet publish --configuration=Release -o pub
docker build . -t netcore-rest
docker run -e Version=1.1 -p 9000:80 netcore-rest
I saw a few tutorials for VSTS, but I'd like to configure it for classic TFS.
I don't have Docker Hub. What interests me is:
-how to kill/remove the currently running container
-how to build a new one from the copied files
-how to run the new container (see the sketch below)
Thank you
PS. I have already installed Docker on my agent machine, so it's able to build an image.
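A minimal sketch of what a command-line build step on the TFS agent could run, assuming the container is given a fixed name (the --name and -d flags are additions to the commands above):
# Stop and remove the currently running container, if any
docker rm -f netcore-rest 2>/dev/null || true
# Publish and build a fresh image from the copied files
dotnet publish --configuration=Release -o pub
docker build . -t netcore-rest
# Run a new container from the fresh image
docker run -d --name netcore-rest -e Version=1.1 -p 9000:80 netcore-rest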

Related

How to run Servicemix commands inside docker in Karaf console using deploy script?

Colleagues, as it happens, I am now using ServiceMix 7.0 in one old project. There are several commands that I run manually.
Build the ServiceMix image:
docker build --no-cache -t servicemix-http:latest .
Start the container, mounting the local source folder and the .m2 folder:
docker run --rm -it -v %cd%:/source -v %USERPROFILE%/.m2:/root/.m2 servicemix-http
Start the ServiceMix console and run this command:
feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml
Then run this command:
feature:install http-feature
Copy the deploy files from the local folder into the ServiceMix deploy folder:
docker cp /configs/. :/deploy
Update the ServiceMix image:
docker commit servicemix-http
Now I am writing the gitlab-ci.yml for the deployment.
My question concerns the ServiceMix commands that are launched from the Karaf console:
feature:repo-add
feature:install
Is there any way to script them?
If all you need is to install features from specific feature repositories on startup, you can add the features and feature repositories to /etc/org.apache.karaf.features.cfg, for example:
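A sketch of the relevant entries, reusing the feature file path from the question (append the values to the existing comma-separated lists in that file rather than replacing them):
# etc/org.apache.karaf.features.cfg
featuresRepositories = file:/source/source/transport/feature/target/http-feature.xml
featuresBoot = http-feature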
You can use ssh with a private key to pass commands to Apache Karaf:
ssh karaf@localhost -p 8101 -i ~/.ssh/karaf_private_key -- bundle:list
You can also try running commands through Karaf_home/bin/client, although I had no luck using it with ServiceMix 7.0.1 on Windows 10 when I tried. It kept throwing NullReferenceException, so I'm guessing my Java installation is missing some security-related configuration.
It works well when running newer Karaf installations on Docker, though, and can be used through docker exec as well.
# Using password - (doesn't seem to be an option for servicemix)
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -p karaf -- bundle:list
# Using private-key
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -k /keys/karaf_key -- bundle:list

How to create a freestyle Jenkins job which runs in Docker

I have a simple C++ program to compile and run using a Jenkins freestyle job. It works fine if I use Jenkins installed directly on my OS (Linux Mint Tricia). The commands I use:
cd "destination"
g++ main.cpp -o test
./test
Everything works well.
BUT, now I'm running Jenkins from a Docker container, and when I try this I get the error "can't cd to destination". I know this is because Docker is isolated from the host machine. So I want to ask: how can I make a simple freestyle job which executes programs on my host machine, using Jenkins running inside Docker?
Thank You
Just run your Jenkins container with a volume, like docker run -it -v destination:destination jenkins
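Spelled out a bit more (a sketch; the host path is illustrative and should be replaced with your actual project directory):
# Mount the host directory into the container at the same path,
# so the job's cd "destination" keeps working unchanged
docker run -d -p 8080:8080 \
  -v /home/me/projects/cpp-demo:/home/me/projects/cpp-demo \
  jenkins/jenkins:lts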

Install Jenkins & its plugins in Docker, then save a new image to be used on another, offline PC

Installing Jenkins and its plugins on an offline PC is difficult.
Can I install Jenkins in Docker, install all the plugins needed on one PC, and then save this new image and copy it to the other PC, which is offline?
One option is to mount a local directory into the Jenkins container and install the plugins there.
docker run -it --rm -v $PWD/:/var/jenkins_home -p 8081:8080 jenkins/jenkins
Once you have mounted the host directory and installed the required plugins, create a Dockerfile like the one below:
FROM jenkins/jenkins
COPY plugins /var/jenkins_home/plugins/
Then build this Dockerfile:
docker build -t my_custom_jenkins .
Then you can share this image with others, and it will contain all the plugins.
If you need the full configuration, not just the plugins, then use the option below:
FROM jenkins/jenkins
COPY . /var/jenkins_home/
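To get the finished image onto the offline PC, docker save and docker load can be used (the archive name here is illustrative):
# On the connected PC: export the image to a tar archive
docker save -o my_custom_jenkins.tar my_custom_jenkins
# Copy the archive to the offline PC (USB stick, network share, etc.), then load it there:
docker load -i my_custom_jenkins.tar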

Storybook changes reload in Docker on Ubuntu, but not Windows Docker Desktop

I have a simple Dockerfile we use solely while developing a React component library that uses Storybook. The configuration simply pulls from node:latest and mounts our project.
Dockerfile
FROM node:latest
EXPOSE 6006
WORKDIR /usr/src/app
COPY . .
RUN npm install
CMD [ "bash" ]
Building and Running
docker build -t <our name> .
docker run --rm -it -p 6006:6006 -v $(pwd):/usr/src/app <our name>
# Inside interactive container
npm run storybook
package.json
{
  "scripts": {
    "storybook": "start-storybook -p 6006"
  }
}
At work, we use Ubuntu and this setup worked as expected.
However, while using:
Windows 10 Pro
Git Bash for Windows
Docker Desktop
it seems that no changes to story files are observed. File saves do not trigger any activity in the console, nor in the browser.
Why could this be the case? Is there a problem with our Docker setup that we're missing?
This actually stems from a known problem, issue #56:
Inotify does not work on Docker for Windows
The Docker for Windows documentation recommends using a polling solution:
Currently, inotify does not work on Docker Desktop for Windows. This becomes evident, for example, when an application needs to read/write to a container across a mounted drive. Instead of relying on filesystem inotify, we recommend using polling features for your framework or programming language.
Example Polling Solution (docker-windows-volume-watcher)
One polling solution is docker-windows-volume-watcher.
Running the below from another terminal in the project directory solves the issue:
docker-volume-watcher <name of running container> $(pwd)
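Another polling-based workaround that may be worth trying (an assumption: it helps when Storybook's file watcher is chokidar-based, since chokidar honours this environment variable) is enabling polling inside the container itself:
# Same run command as before, with chokidar polling enabled
docker run --rm -it -p 6006:6006 -v $(pwd):/usr/src/app \
  -e CHOKIDAR_USEPOLLING=true <our name>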

How does the Jenkins CloudBees Docker Build Plugin set its Shell Path

I'm working with a Jenkins install I've inherited. This install has the CloudBees Docker Custom Build Environment Plugin installed. We think this plugin gives us the nifty "Build inside a Docker container" checkbox in our build configuration. When we configure jobs with this option, it looks like (based on the Jenkins console output) Jenkins runs the build with the following commands:
Docker container 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q started to host the build
[WIP-Tests] $ docker exec -t 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q /bin/sh -xe /tmp/hudson7939624860798651072.sh
However -- we've found that this runs /bin/sh with a very limited set of environment variables -- including a $PATH that doesn't include /bin! So:
How does the CloudBees Docker Custom Build Environment Plugin set up its /bin/sh environment? Is this user-configurable via the UI (and if so, where)?
It looks like Jenkins is using docker exec -- which I think means that it must have (invisibly) set up a container with a long-running process using docker run. Does anyone know how the CloudBees Docker Custom Build Environment Plugin invokes docker run, and whether this is user-manageable?
Considering this plugin is "up for adoption", I would recommend the official JENKINS/Docker Pipeline Plugin.
Its source code shows very few recent commits.
But don't forget that any such container has a default entrypoint set to /bin/sh:
ENTRYPOINT ["/bin/sh", "-c"]
Then:
The docker container is run after the SCM has been checked out into a slave workspace; then all later build commands are executed within the container, thanks to docker exec, introduced in Docker 1.3.
That means the image you will be pulling or building/running must have a shell, to allow docker exec to execute commands in it.
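A rough reconstruction of that mechanism in plain Docker commands (a sketch of the behaviour described above, not the plugin's exact invocation; the image and container names are illustrative):
# Start a long-lived container; with ENTRYPOINT ["/bin/sh", "-c"], the
# argument 'cat' blocks for as long as stdin is kept open by -i
docker run -d -i --name build-env my-build-image 'cat'
# Each build step then runs inside that container
docker exec -t build-env /bin/sh -xe /tmp/build-script.sh
# Remove the container when the build finishes
docker rm -f build-env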
