When running the gerritcodereview/gerrit Docker container, Gerrit is installed in the /var/gerrit directory inside the container. But when I try to install plugins by docker cp'ing a plugin .jar file, downloaded from https://gerrit-ci.gerritforge.com/job/plugin-its-jira-bazel-stable-2.16/, into the /var/gerrit/plugins directory, the plugins do not show up in the list of installed plugins, even though I restarted the container.
I ran gerrit with:
docker run -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
And Gerrit is accessible via:
http://localhost:8080/admin/plugins
I also see a list of plugins in the plugin manager at http://localhost:8080/plugins/plugin-manager/static/index.html, but I don't know how to add more plugins to that list; I have tried pointing to the gerrit-ci.gerritforge.com URL in the [httpd] section.
My gerrit.config file looks like this:
[gerrit]
    basePath = git
    serverId = 62b710a2-3947-4e96-a196-6b3fb1f6bc2c
    canonicalWebUrl = http://10033a3fe5b7
[database]
    type = h2
    database = db/ReviewDB
[index]
    type = LUCENE
[auth]
    type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
    smtpServer = localhost
[sshd]
    listenAddress = *:29418
[httpd]
    listenUrl = http://*:8080/
    filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
    firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
    directory = cache
[plugins]
    allowRemoteAdmin = true
[container]
    javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
    javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
    user = gerrit
    javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
    javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
    enableSignedPush = false
[noteDb "changes"]
    autoMigrate = true
I am pretty sure that Gerrit runs from /var/gerrit, even for your version, as that is the version I used before.
Why don't you use docker-compose together with a custom Dockerfile? That way you can easily recreate your image, and you don't need to worry about adding plugins again after you upgrade your version.
I would suggest that you play around with these scripts and use them for your testing.
This is what my Dockerfile looks like for my previous 2.16 installation:
FROM gerritcodereview/gerrit:2.16.8
# Add custom plugins that are not downloaded from the web
COPY ./plugins/* /var/gerrit/plugins/
# Add logo
COPY ./static/* /var/gerrit/static/
ADD https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-avatars-gravatar-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/avatars-gravatar/avatars-gravatar.jar /var/gerrit/plugins/
USER root
# Fix any permissions
RUN chown -R gerrit:gerrit /var/gerrit
USER gerrit
ENV CANONICAL_WEB_URL=https://gerrit.mycompany.net/r/
And below is the docker-compose.yml:
version: '3.4'
services:
  gerrit:
    build: .
    ports:
      - "29418:29418"
      - "8080:8080"
    restart: unless-stopped
    volumes:
      - /external/gerrit2.16/etc:/var/gerrit/etc
      - /external/gerrit2.16/git:/var/gerrit/git
      - /external/gerrit2.16/index:/var/gerrit/index
      - /external/gerrit2.16/cache:/var/gerrit/cache
      - /external/gerrit2.16/logs:/var/gerrit/logs
      - /external/gerrit2.16/.ssh:/var/gerrit/.ssh
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war init --install-all-plugins -d /var/gerrit
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war reindex -d /var/gerrit
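With a setup like this, adding or upgrading a plugin is just a matter of editing the Dockerfile and recreating the container; a typical cycle (assuming the Dockerfile and docker-compose.yml above sit in the same directory) would be:
# rebuild the image with the plugins baked in, then recreate the container
docker-compose build
docker-compose up -d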
Finally, I found a way that works for my use case.
Copy the content of your public key and add it in the SSH settings of the web UI profile for my_gerrit_admin_username.
Add the key to the ssh-agent:
eval `ssh-agent`
ssh-add .ssh/id_rsa
Then, from a terminal outside the container, run:
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin install -n its-base.jar https://gerrit-ci.gerritforge.com/job/plugin-its-base-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/its-base/its-base.jar
Check in the web UI that the plugin is installed among the plugins.
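To double-check from the command line as well, Gerrit's SSH interface can list the loaded plugins (assuming the same admin user and port as above):
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin ls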
I deployed a MERN app to a DigitalOcean droplet with Docker. If I run my docker-compose.yml file locally on my PC, it works well. I have two containers: one backend, one frontend. If I try to compose up on the droplet, the frontend seems fine but can't communicate with the backend.
I use http-proxy-middleware; my setupProxy.js file:
const { createProxyMiddleware } = require('http-proxy-middleware');
module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://0.0.0.0:5001',
      changeOrigin: true,
    })
  );
};
I tried target: 'http://main-be:5001' too, as main-be is the name of my backend container, but I get the same error; only the Request URL in the Chrome DevTools Network tab becomes http://main-be:5001/api/auth/login.
My docker-compose.yml file:
version: '3.4'
networks:
  main:
services:
  main-be:
    image: main-be:latest
    container_name: main-be
    ports:
      - '5001:5001'
    networks:
      main:
    volumes:
      - ./backend/config.env:/app/config.env
    command: 'npm run prod'
  main-fe:
    image: main-fe:latest
    container_name: main-fe
    networks:
      main:
    volumes:
      - ./frontend/.env:/app/.env
    ports:
      - '3000:3000'
    command: 'npm start'
My Dockerfile in the frontend folder:
FROM node:12.2.0-alpine
COPY . .
RUN npm ci
CMD ["npm", "start"]
My Dockerfile in the backend folder:
FROM node:12-alpine3.14
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["npm", "run", "prod"]
backend/package.json file:
"scripts": {
"start": "nodemon --watch --exec node --experimental-modules server.js",
"dev": "nodemon server.js",
"prod": "node server.js"
},
frontend/.env file:
SKIP_PREFLIGHT_CHECK=true
HOST=0.0.0.0
backend/config.env file:
DE_ENV=development
PORT=5001
My deploy.sh script to build the images and copy them to the droplet:
#build and save backend and frontend images
docker build -t main-be ./backend & docker build -t main-fe ./frontend
docker save -o ./main-be.tar main-be & docker save -o ./main-fe.tar main-fe
#deploy services
ssh root@46.111.119.161 "pwd && mkdir -p ~/apps/first && cd ~/apps/first && ls -al && echo 'im in' && rm main-be.tar && rm main-fe.tar &> /dev/null"
#::scp file
#scp ./frontend/.env root@46.111.119.161:~/apps/first/frontend
#upload main-be.tar and main-fe.tar to VM via ssh
scp ./main-be.tar ./main-fe.tar root@46.111.119.161:~/apps/thesis/
scp ./docker-compose.yml root@46.111.119.161:~/apps/first/
ssh root@46.111.119.161 "cd ~/apps/first && ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i"
ssh root@46.111.119.161 "cd ~/apps/first && sudo docker-compose up"
frontend/src/utils/axios.js:
import axios from 'axios';
export const baseURL = 'http://localhost:5001';
export default axios.create({ baseURL });
frontend/src/utils/constants.js:
const API_BASE_ORIGIN = `http://localhost:5001`;
export { API_BASE_ORIGIN };
I have been trying for days but can't see where the problem is, so any help is highly appreciated.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had a similar issue when setting this up in the beginning: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas when you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a web application only serves the files to your browser (client); the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service that your FE calls (set up a domain and routing), for instance Nginx, Traefik, etc. That proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy.)
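A quick way to see the difference is to test where the backend is actually reachable from; a sketch, assuming the compose file from the question (busybox wget ships with the Alpine-based node images, and <droplet-ip> stands for the droplet's public address):
# from inside the frontend container, the compose service name resolves:
docker-compose exec main-fe wget -qO- http://main-be:5001/api/auth/login
# from the browser's point of view, only the public IP and the published port exist:
curl http://<droplet-ip>:5001/api/auth/login
Even an HTTP error response here proves connectivity; a timeout or DNS failure shows which hop is broken.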
I have a problem adding authentication, due to new requirements, while using Apache NiFi (NiFi) without SSL, running it in a container.
The image version is apache/nifi:1.13.0
It's said that SSL is unconditionally required to add authentication, and it's recommended to use the tls-toolkit shipped in the NiFi image to add SSL. I worked through the following process:
I dropped the environment variable nifi.web.http.port used for HTTP communication, and started the container in standalone mode with nifi.web.https.port=9443:
docker-compose up
I entered the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
I organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value of the -n option of the tls-toolkit script:
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over as-is; only the following properties were carried into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the error log below:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log
Hint
The dead container's volume was still accessible, so I copied out and checked nifi.properties; whenever I did docker-compose up or restart, it changed as follows.
The part I had overwritten or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with nifi.web.http.host and nifi.web.http.port left empty. My docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you
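For what it's worth, the values that actually land in nifi.properties after startup can be inspected from the host; a minimal check, assuming the container name nifi and the standard conf path of the apache/nifi image:
docker exec nifi grep -E '^nifi\.web\.https?\.(host|port)=' /opt/nifi/nifi-current/conf/nifi.properties
Comparing that output with the environment block above may help pin down which variable feeds nifi.web.http.port; note that the compose file maps NIFI_WEB_HTTP_PORT to ${NIFI_HTTPS_PORT}, which matches the 9443 that reappears in the Hint section.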
I used docker-compose to run a local GitLab server:
git:
  container_name: git-server
  image: gitlab/gitlab-ce:latest
  hostname: 'gitlab.example.com'
  ports:
    - '8090:80'
    - '22:22'
  volumes:
    - "$PWD/srv/gitlab/config:/etc/gitlab"
    - "$PWD/srv/gitlab/logs:/var/log/gitlab"
    - "$PWD/srv/gitlab/data:/var/opt/gitlab"
  networks:
    - net
I want to set up custom hooks for a project repo I created in the GitLab web UI, so that pushes trigger a Jenkins job. As per the GitLab documentation, this is the path for repos in Omnibus installations, where I will have to create the custom_hooks directory:
/var/opt/gitlab/git-data/repositories/<group>/<project>.git
But inside /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all:
root@gitlab:~# ls -lt /var/opt/gitlab/git-data/repositories
total 0
drwxr-s---. 3 git root 16 Apr 18 04:05 @hashed
drwxr-sr-x. 3 git root 17 Apr 18 04:00 +gitaly
root@gitlab:~#
I tried searching using find, but it returned nothing. I tried searching by the names of files in my project repo, but that didn't return anything either.
In the GitLab web UI, I can see it all. But on the server, none of the files and directories exist.
How is it that I am not able to find any of the files in my repos when I SSH into the gitlab-server?
Since I could not go this way, I tried creating a post-receive.d directory under the global hooks directory /opt/gitlab/embedded/service/gitlab-shell/hooks and then adding my post-receive file as below:
#!/bin/bash
# Get branch name from ref head
if ! [ -t 0 ]; then
  read -a ref
fi
IFS='/' read -ra REF <<< "${ref[2]}"
branch="${REF[2]}"
if [ "$branch" == "master" ]; then
  crumb=$(curl -u "jenkins:1234" -s 'http://jenkins:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
  curl -u "jenkins:1234" -H "$crumb" -X POST http://jenkins:8080/job/maven/build?delay=0sec
  if [ $? -eq 0 ] ; then
    echo "*** Ok"
  else
    echo "*** Error"
  fi
fi
jenkins is the name of the Jenkins container, which is on the same network as the GitLab server.
The GitLab docs say I then have to change the ownership of the file to git and make it executable. I did so, but it didn't work either. Also, I find that all of the git directories are owned by root in my container.
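For reference, the permission change the docs describe would look like this sketch (assuming the hook file sits in the post-receive.d directory used above):
# hand the hook to the git user and make it executable
chown git:git /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive
chmod +x /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive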
After pushing code, I figured out that the hook I put in the /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d directory is not working, and in the logs I see the error below right after I push code changes to my maven repo:
==> /var/log/gitlab/nginx/gitlab_error.log <==
2020/04/18 04:57:31 [crit] 832#0: *256 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (13: Permission denied) while connecting to upstream, client: <my_public_ip>, server: gitlab.example.com, request: "GET /jenkins/maven.git/info/refs?service=git-receive-pack HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/jenkins/maven.git/info/refs?service=git-receive-pack", host: "gitlab.example.com:8090"
Here, gitlab.example.com is mapped to my public IP in the /etc/hosts file of the host on which I am running Docker.
If you run the following command inside the container, you should see your group repos:
gitlab-rake gitlab:storage:rollback_to_legacy
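A minimal way to run it without opening a shell first, assuming the container_name git-server from the compose file above:
docker exec -it git-server gitlab-rake gitlab:storage:rollback_to_legacy
This moves repositories out of hashed storage (the @hashed directory in the listing above) back to legacy <group>/<project>.git paths.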
"inside of /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all"
The documentation "Install GitLab using docker-compose" includes the following volumes:
volumes:
  - '$GITLAB_HOME/gitlab/config:/etc/gitlab'
  - '$GITLAB_HOME/gitlab/logs:/var/log/gitlab'
  - '$GITLAB_HOME/gitlab/data:/var/opt/gitlab'
That means that if you see some repos locally in $GITLAB_HOME/gitlab/data/git-data/repositories, you should see the same in /var/opt/gitlab/git-data/repositories/.
Assuming, of course, that you have created at least one project/repo in your GitLab instance.
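Applied to the compose file from the question, which mounts $PWD/srv/gitlab/data rather than $GITLAB_HOME/gitlab/data, the equivalent host-side check would be:
ls -lt ./srv/gitlab/data/git-data/repositories
With hashed storage enabled you would still see the @hashed layout here instead of <group>/<project>.git, unless you roll back to legacy storage as in the other answer.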
I am unable to build the Jekyll site I cloned with git, due to a permission error. I am using Ubuntu 18.04. I've looked at most SO posts regarding this error, but none of the solutions have worked for me.
The command to build the site is docker-compose up. Running this command with or without sudo does not change the error. I am in the docker group. I am able to build the site using bundle exec jekyll serve; that command successfully creates the _site folder.
I tried adding _site manually, but this results in a different error.
$ docker-compose up
Starting site ... done
Attaching to site
site | Configuration file: /srv/jekyll/_config.yml
site | Source: /srv/jekyll
site | Destination: /srv/jekyll/_site
site | Incremental build: disabled. Enable with --incremental
site | Generating...
site | Jekyll Feed: Generating feed for posts
site | jekyll 3.8.5 | Error: Permission denied @ dir_s_mkdir - /srv/jekyll/_site
site exited with code 1
docker-compose.yml
version: '3'
services:
  doc:
    image: jekyll/jekyll:3.8
    volumes:
      - .:/srv/jekyll
    container_name: site
    ports:
      - "4000:4000"
    stdin_open: true
    tty: true
    command: bash -c "bundle install && bundle exec jekyll serve --host 0.0.0.0"
I am expecting to get no errors and have the _site directory successfully created.
I have the same issue. What I understand of the problem is that the image jekyll/jekyll:3.8 creates a group and user called jekyll with UID 1000, which is the first regular user on Ubuntu (in my case).
When you run your compose file as another user whose UID differs from 1000, the volume you mount is owned by that user, which is not the same UID as jekyll, so when the service tries to write something it has no permission.
To solve that, I found these environment variables in this repository:
JEKYLL_UID = 1000
JEKYLL_GID = 1000
Add the env variables to the compose file, changing the value 1000 to the UID of your current user:
environment:
  JEKYLL_UID: 1001
  JEKYLL_GID: 1001
This works fine for me.
To print your current UID :
$ echo $UID
1001
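To avoid hardcoding the UID, compose variable substitution can pick it up at launch time; a sketch, assuming the same service as above (the :-1000 parts are fallback defaults):
environment:
  JEKYLL_UID: ${JEKYLL_UID:-1000}
  JEKYLL_GID: ${JEKYLL_GID:-1000}
and then:
JEKYLL_UID=$(id -u) JEKYLL_GID=$(id -g) docker-compose up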
I'm trying to run my tests through travis-ci, but I receive a "file not found tests" error. When I run the same command locally everything is ok, but in Travis I receive this error. I think it's because the tests folder, which is in the root of my project, ends up somewhere else there: in the directory where Travis copies the GitHub repo.
I tried these settings in tox.ini, but none of them helped:
commands = py.test $TRAVIS_BUILD_DIR/tests {posargs}
passenv = TRAVIS_BUILD_DIR
commands = py.test $TRAVIS_BUILD_DIR/tests {posargs}
commands = py.test {env:TRAVIS_BUILD_DIR:.}/tests {posargs}
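For completeness, the second and third attempts combined into one testenv would look like the sketch below; note one possible catch given the compose file further down: the repo is mounted into the tox container at /src (read-only), so a host path like $TRAVIS_BUILD_DIR may not exist inside the container even when it is passed through.
[testenv]
passenv = TRAVIS_BUILD_DIR
commands = py.test {env:TRAVIS_BUILD_DIR:.}/tests {posargs}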
My setup:
My .travis.yml
sudo: required
language: python
services:
  - docker
script:
  - docker-compose run tox
My docker-compose file:
version: '2'
services:
  db:
    image: postgres
  tox:
    build: .
    depends_on:
      - db
    volumes:
      - ".:/src:ro"
My tox.ini
[tox]
envlist = {py27}-{django18,django19,django110,django111},{py35}-{django18,django19,django110,django111,django20}
skipsdist = {env:TOXBUILD:false}
[testenv]
sitepackages = False
deps =
    pytest==2.9.2
    pytest-capturelog==0.7
    pytest-django==2.9.1
    psycopg2==2.7.3.2
    pytest-pep8==1.0.6
    freezegun==0.3.9
    pytz==2017.3
    django18: Django>=1.8,<1.9
    django19: Django>=1.9,<1.10
    django110: Django>=1.10,<1.11
    django111: Django>=1.11,<2.0
    django20: Django>=2.0,<2.1
commands = py.test tests {posargs}
My pytest.ini
[pytest]
addopts=-l -q --capture=no
pep8ignore = E501
norecursedirs = .robe .idea
python_files = tests.py test_*.py
DJANGO_SETTINGS_MODULE = test_app.settings
My Dockerfile:
FROM themattrix/tox
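One way to narrow this down is to check what the tox container actually sees, since the compose file mounts the repo at /src read-only; a debugging sketch, assuming the same service names as above:
# override the entrypoint in case the image runs tox by default
docker-compose run --entrypoint sh tox -c "ls -la /src/tests"
If the folder is listed there but py.test still reports it missing, the working directory inside the container is probably not /src, and hardcoding the mount path (commands = py.test /src/tests {posargs}) may be the simpler fix.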