Drone.io does not trigger on git push - Docker

I am trying to add a dockerized Drone.io alongside my existing Gitea (also in a Docker container).
Drone is working and sees each of my repos. I enabled Drone on one of them, called my-app, for the test.
Since Drone needs a file called .drone.yml, I created one and filled it with some basic code to define a pipeline and run some tests:
kind: pipeline
name: default

steps:
- name: test
  image: maven:3-jdk-10
  commands:
  - mvn install
  - mvn test
Finally I pushed it, but nothing seems to happen on Drone.
Here is how I started my containers:
docker run \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=data:/data \
  --env=DRONE_GITEA_SERVER=https://... \
  --env=DRONE_GIT_ALWAYS_AUTH=false \
  --env=DRONE_RUNNER_CAPACITY=2 \
  --env VIRTUAL_PORT=80 \
  --env VIRTUAL_HOST=my.domain \
  --env LETSENCRYPT_HOST="my.domain" \
  --env LETSENCRYPT_EMAIL="me@email.com" \
  --restart=always \
  --detach=true \
  --name=drone \
  drone/drone:1

docker run --name git \
  -v /home/leix/gitea:/data \
  -e VIRTUAL_PORT=3000 \
  -e VIRTUAL_HOST=other.domain \
  -e LETSENCRYPT_HOST="other.domain" \
  -e LETSENCRYPT_EMAIL="me@email.com" \
  -d gitea/gitea
I expect Drone to run the tests on git push.

I finally found a solution, though I don't know why it works: I switched from docker run to Docker Compose, and it works pretty well.
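For reference, a minimal docker-compose.yml along those lines might look like the sketch below. It simply transcribes the docker run flags from the question (the elided DRONE_GITEA_SERVER value and the named data volume are kept as placeholders):

version: '3'

services:
  drone:
    image: drone/drone:1
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Drone spawn pipeline containers
      - data:/data                                 # Drone's own state
    environment:
      - DRONE_GITEA_SERVER=https://...             # same Gitea URL as in the question
      - DRONE_GIT_ALWAYS_AUTH=false
      - DRONE_RUNNER_CAPACITY=2
      - VIRTUAL_PORT=80
      - VIRTUAL_HOST=my.domain
      - LETSENCRYPT_HOST=my.domain
      - LETSENCRYPT_EMAIL=me@email.com

volumes:
  data: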

Related

How to Pass Job Parameters to aws-glue-libs Docker Container?

I'm running and developing an AWS Glue job in a Docker container (https://gallery.ecr.aws/glue/aws-glue-libs), and I need to pass job parameters so that I can read them with getResolvedOptions, just as in production. Besides that, I also need to pass the --additional-python-modules job parameter to install some libraries.
I know I could use pip inside the container, but I want to keep things as close to production as possible. I also use docker-compose to run the container:
version: '3.9'

services:
  datacop:
    image: public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01
    container_name: aws-glue
    tty: true
    ports:
      - 4040:4040
      - 18080:18080
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - DISABLE_SSL=true
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ./workspace:/home/glue_user/workspace
I don't use docker-compose but docker run, and I simply append the job parameters to the spark-submit command.
glue_main.py is the script I want to execute. --JOB_NAME <some name> is also required if it is used inside the script, which it usually is:
/home/glue_user/spark/bin/spark-submit glue_main.py --foo bar --baz 123 --JOB_NAME foo_job
This is my full command:
docker run --rm -i \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
  -e AWS_REGION=${AWS_REGION} \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -p 4040:4040 \
  -e DISABLE_SSL="true" \
  -v $(pwd)/${LOCAL_DEPLOY_DIR}:${REMOTE_DEPLOY_DIR} \
  --workdir="${REMOTE_DEPLOY_DIR}" \
  --entrypoint "/bin/bash" \
  amazon/aws-glue-libs:glue_libs_4.0.0_image_01 \
  /home/glue_user/spark/bin/spark-submit ${SCRIPT_NAME} --py-files site-packages.zip ${JOB_ARGS}
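The deploy-directory and job-argument variables are not defined in the answer; purely as an illustration, they might expand to something like this (all values hypothetical):

LOCAL_DEPLOY_DIR=dist                             # local folder with the script and site-packages.zip
REMOTE_DEPLOY_DIR=/home/glue_user/deploy          # mount point inside the container
SCRIPT_NAME=glue_main.py                          # the Glue job entry point
JOB_ARGS="--JOB_NAME foo_job --foo bar --baz 123" # read in the script via getResolvedOptions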

Enable fine-grained authorization on Keycloak with Docker

I have set up Keycloak using Docker. My problem is that I need to make some modifications on the clients, which requires fine-grained authorization to be enabled. I have read the documentation and I know I should use the parameter -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled. I tried to use that in my docker run command, but with no luck:
docker run --rm \
  --name keycloak \
  -p 80:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=[adminPass] \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e DB_VENDOR=MYSQL \
  -e DB_ADDR=[SQL_Server] \
  -e DB_DATABASE=keycloak \
  -e DB_USER=[DBUSER] \
  -e DB_PASSWORD=[DB_PASS] \
  -e JDBC_PARAMS=useSSL=false \
  -e -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled \
  jboss/keycloak
Any help?
It is documented in the Docker image readme: https://hub.docker.com/r/jboss/keycloak
Additional server startup options (extension of JAVA_OPTS) can be configured using the JAVA_OPTS_APPEND environment variable.
So in your case:
-e JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
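Applied to the command from the question, the broken -e -D... line would be replaced by the JAVA_OPTS_APPEND variable, roughly like this (a sketch keeping the asker's placeholders; the DB_* and proxy flags are omitted for brevity):

docker run --rm \
  --name keycloak \
  -p 80:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=[adminPass] \
  -e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled" \
  jboss/keycloak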
I guess you might need to pass the environment variables to the JVM when starting the WildFly server that contains the Keycloak WAR. There is a runner shell script that starts when the container launches; you would need to add your environment variables to that call.

How to completely erase a GitLab Server Docker container from the machine?

While writing an automated deployment script for a self-hosted GitLab server, I noticed that my uninstallation script does not (completely) delete the GitLab server settings or repositories. I would like the uninstaller to remove all traces of the previous GitLab server installation.
MWE
#!/bin/bash

uninstall_gitlab_server() {
  gitlab_container_id=$1
  sudo systemctl stop docker
  sudo docker stop gitlab/gitlab-ce:latest
  sudo docker rm gitlab/gitlab-ce:latest
  sudo docker rm -f "$gitlab_container_id"
}

uninstall_gitlab_server <some_gitlab_container_id>
Observed behaviour
When running the installation script again, the GitLab repositories from the previous installation are still there, and the GitLab root user account password is also preserved.
Expected behaviour
I would expect the Docker container, and hence the GitLab server data, to be erased from the device. Accordingly, I would expect the GitLab server to ask for a new root password and not to display previously existing repositories.
Question
How can I completely remove the GitLab server that is installed with:
sudo docker run --detach \
  --hostname $GITLAB_SERVER \
  --publish $GITLAB_PORT_1 --publish $GITLAB_PORT_2 --publish $GITLAB_PORT_3 \
  --name $GITLAB_NAME \
  --restart always \
  --volume $GITLAB_HOME/config:/etc/gitlab \
  --volume $GITLAB_HOME/logs:/var/log/gitlab \
  --volume $GITLAB_HOME/data:/var/opt/gitlab \
  -e GITLAB_ROOT_EMAIL=$GITLAB_ROOT_EMAIL -e GITLAB_ROOT_PASSWORD=$gitlab_server_password \
  gitlab/gitlab-ce:latest
Stopping and removing the containers doesn't remove any host/Docker volumes you may have mounted/created.
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
You need to rm -rf $GITLAB_HOME
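Put together, a fuller teardown might look like this sketch (it assumes the same GITLAB_NAME and GITLAB_HOME variables as the install command; double-check GITLAB_HOME before deleting it):

# Stop and remove the container by the name it was started with
sudo docker stop "$GITLAB_NAME"
sudo docker rm "$GITLAB_NAME"
# Delete the bind-mounted directories holding config, logs, and all GitLab data
sudo rm -rf "$GITLAB_HOME"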

Elasticsearch Metricbeat Docker install error

Hello everyone, I'm new to the ELK stack.
I'm trying to run ELK in Docker with Metricbeat, but unfortunately I have a problem with the Metricbeat setup:
docker run \
  docker.elastic.co/beats/metricbeat:7.9.1 \
  setup -E setup.kibana.host=ELK-IP-Address:5601 \
  -E output.elasticsearch.hosts=["ELK-IP-Address:9200"] \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme
When I run that command in my terminal, I get this error:
zsh: no matches found: output.elasticsearch.hosts=[elasticsearch:9200]
Please help me :(
Add a backslash \ before each of the square brackets, i.e. output.elasticsearch.hosts=\[elasticsearch:9200\]. zsh treats unquoted square brackets as glob patterns, which is why it reports "no matches found".
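Applied to the command from the question, that would give something like the following sketch (quoting the whole -E option would work as well):

docker run \
  docker.elastic.co/beats/metricbeat:7.9.1 \
  setup -E setup.kibana.host=ELK-IP-Address:5601 \
  -E output.elasticsearch.hosts=\["ELK-IP-Address:9200"\] \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme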

How to deploy NeoLoad in Docker

Please, does anyone know how to deploy NeoLoad on Docker? I have looked at the NeoLoad package on Docker Hub, but it doesn't make much sense to me. I want to use it for performance testing. The link is https://hub.docker.com/r/neotys/neoload-controller/
As explained in the documentation, there are two ways to deploy your NeoLoad controller on Docker:
Managed: this mode only works with NeoLoad Web.
Standalone: basically, when you run your NeoLoad container, you give it some parameters like the NeoLoad project, the number of virtual users, etc. The test is launched as soon as the container starts.
From the docker hub documentation:
docker run -d --rm \
  -e PROJECT_NAME={project-name} \
  -e SCENARIO={scenario} \
  -e NTS_URL={nts-url} \
  -e NTS_LOGIN={login:password} \
  -e COLLAB_URL={collab-url} \
  -e LICENSE_ID={license-id} \
  -e VU_MAX={vu-max} \
  -e DURATION_MAX={duration-max} \
  -e NEOLOADWEB_URL={nlweb-onpremise-apiurl:port} \
  -e NEOLOADWEB_TOKEN={nlweb-token} \
  -e PUBLISH_RESULT={publish-result} \
  neotys/neoload-controller
You either have to pull the license from NeoLoad Web or from an NTS server.
I would need more information about your problem to help you further.
Regards
