How to enable Redmine's 'adding of issues' when using Docker?

I have the official postgres image running as a docker container, as well as the official redmine 3.3.1 container linked to the postgres container. All my data is being persisted and Redmine appears to be working fine, with one exception.
I have no way of adding issues within Redmine. I have the modules enabled and my user has the manager, developer, and reporter roles and permissions. I've also given the user admin rights, but it's still a no-go.
I suspect this problem has something to do with using Docker containers since I don't have the issue when running directly on the file system (no containers).
Thoughts?
Edit: (adding commands)
docker run -d --name postgres \
-v /home/me/redmine/postgresql:/var/lib/postgresql/data \
-e POSTGRES_DB=redmine \
-e POSTGRES_USER=redmine \
-e POSTGRES_PASSWORD=secret postgres
docker run -d -p 3000:3000 --name redmine \
-v /home/me/redmine/files:/usr/src/redmine/files \
--link postgres:postgres redmine

Redmine has no notion of the underlying infrastructure, so if the "new issue" button isn't shown, that has nothing to do with missing write permissions on the file system or database level, for instance.
If you can log in, then your Redmine has already successfully performed database UPDATEs, and creating an issue does not need anything else, so you'd be looking in the wrong place when checking your Docker config.
I am almost certain you're missing permissions or a default configuration (e.g. issue statuses, issue priorities, roles, trackers, workflows, etc.), as mentioned in my comment above.
I am assuming you don't have any relevant data yet in your Redmine database. If that's the case, please try the following.
WARNING: This deletes all Redmine data.
export RAILS_ENV=production - set the environment assuming that your Docker image is built for a production Redmine, otherwise try development.
bundle exec rake db:drop - delete the database
bundle exec rake db:create - recreate an empty database
bundle exec rake db:migrate - recreate the schema
bundle exec rake redmine:load_default_data - this is the crucial step which I suspect was missed last time; it creates all the default objects (issue statuses, priorities, trackers, workflows, etc.) required to successfully work with your Redmine, e.g. to create issues!
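If you prefer to run this sequence from the host instead of opening a shell inside the container, a rough equivalent would be the following sketch (it assumes the container is named redmine as in the docker run command above; REDMINE_LANG just makes the default-data task non-interactive):
# WARNING: this still deletes all Redmine data
docker exec -it redmine bash -c '
  export RAILS_ENV=production
  bundle exec rake db:drop db:create db:migrate
  bundle exec rake redmine:load_default_data REDMINE_LANG=en
'
If PostgreSQL refuses to drop the database because the running app still holds connections, restart the redmine container first and run the commands again.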

Related

Docker: Database is uninitialized and superuser password is not specified

I am new to docker and started learning today.
I am using Windows Terminal to run the docker command.
I ran this command - docker run postgres:10.20
I got an error stating:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
See PostgreSQL documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
Can anybody help me with how to add the password to that command in the terminal, apart from setting an environment variable for the password in docker-compose.yaml?
I am a beginner on Stack Overflow as well and was not allowed to add an image to provide a reference. Assistance needed.
All you need to do is:
docker run -e POSTGRES_PASSWORD=password postgres:10.20
Here is the documentation for setting environment variables for docker run: https://docs.docker.com/engine/reference/run/#env-environment-variables
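If you also want the data to survive container restarts and the server to be reachable from the host, a slightly fuller variant could look like this (the container name, published port, and volume name are only illustrative):
docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=password \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:10.20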

How to access PostgreSQL and MongoDB databases in wolkenkit

I am not able to connect to MongoDB and PostgreSQL. I am using the command below:
docker exec -it todomvc-mongodb mongo -user wolkenkit -p 576085646aa24f4670b929f0c47032ebf149e48f admin
It shows the following result:
2018-08-14T11:48:20.592+0000 E QUERY [thread1] Error: Authentication failed. :
I have tried to reproduce your issue. What I have done:
I cloned the wolkenkit-todomvc sample application.
I started it using wolkenkit start.
This gave me the (randomly created) shared key 4852f4430d67990c28354b6fafae449c4f82e9ab (please note that this value is different each time you run wolkenkit start, unless you set it explicitly to a value of your choice, so YMMV).
Then I tried:
$ docker exec -it todomvc-mongodb mongo -user wolkenkit -p 4852f4430d67990c28354b6fafae449c4f82e9ab admin
It actually doesn't work. The reason for this is that the parameter -user does not exist; it has to be either -u or --username. If you run:
$ docker exec -it todomvc-mongodb mongo -u wolkenkit -p 4852f4430d67990c28354b6fafae449c4f82e9ab admin
Then, things work as expected.
Hope this helps 😊

How to persist configuration & analytics across container invocations in Sonarqube docker image

The official Sonarqube Docker image does not persist any configuration changes like creating users, changing the root password, or even installing new plugins.
Once the container is restarted, all the configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart.
How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine which is not recommended for production and doesn't persist across container restarts.
We need to set up a database of our own and point Sonarqube at it when starting the container.
The Sonarqube Docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile.
Since we want to persist the data across invocations, we need to make sure that a production-grade database is set up and linked to Sonarqube, and that the extensions directory is created and mounted as a volume on the host machine, so that all the downloaded plugins are available across container invocations and can be used by multiple containers (if required).
Database Setup:
create database sonar;
grant all on sonar.* to 'sonar'@'%' identified by 'SOME_PASSWORD';
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the Sonarqube host IP.
It is not necessary to create tables, Sonarqube creates them if it doesn't find them.
Starting up Sonarqube container:
# create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
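Once the container is up, you can follow its logs to verify that it started and connected to the database you configured (using the container name from the command above):
docker logs -f sonarqube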
Hi @VanagaS and others landing here.
I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and other folders under that path and store its data there, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath to create the extensions, conf, and other folders, and save its data therein.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Since Sonarqube v7.9, MySQL is no longer supported; one needs to use PostgreSQL. Install PostgreSQL and configure it to listen on the host IP rather than localhost (a private IP is preferred).
Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04
postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;
# create a directory on the host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=mypass \
-e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
You may face this error when you run docker logs <container_id>:
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas
vm.max_map_count [65530] is too low, increase to at least [262144]
This is the fix; run it on your host:
sysctl -w vm.max_map_count=262144
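Note that sysctl -w only sets the value until the next reboot. To make it permanent, persist it in the sysctl configuration as well (the file name below is just an example; any *.conf file under /etc/sysctl.d/ works on most distributions):
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.d/99-sonarqube.conf
sudo sysctl --system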
To make PostgreSQL listen on the host's IP, edit /etc/postgresql/10/main/postgresql.conf.
To allow the Docker container as a client for PostgreSQL, edit /etc/postgresql/10/main/pg_hba.conf.
(10 is the PostgreSQL version used here.)
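As a rough illustration of those two edits (the address and subnet below are placeholders for your own private IP and Docker network range):
# /etc/postgresql/10/main/postgresql.conf
listen_addresses = 'localhost,10.0.0.5'
# /etc/postgresql/10/main/pg_hba.conf - allow the sonar user to reach the sonar database from the Docker network
host    sonar    sonar    172.17.0.0/16    md5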

Docker+Rails+Postgres: app can't access database container

New to Docker; I'm trying to follow along with two CodeShip tutorials for Rails + Postgres inside Docker containers. The database container starts and stops with the app container under the control of docker-compose up -d and docker-compose rm. Postgres commands are not available inside the app container, however, so docker-compose run app bin/rake db:create chokes, reporting that createdb could not be found.
I've written this up more fully in this Gist.
I've been stuck for 3 days and counting; help would be greatly appreciated!

How to run Rails migrations and seeding in Amazon Elastic Beanstalk single container Docker environment

I'm working on deploying a Rails application to Elastic Beanstalk using docker and so far everything has worked out. I'm at the point where the application needs to run migrations and seeding of the database, and I'm having trouble figuring out exactly how I need to proceed. It appears that any commands in the /.ebextensions folder run in the context of the host machine and not the docker container. Is that correct?
I'm fine with running a command to execute migrations inside of the docker container after startup, but how do I ensure that the migrations only run on a single instance? Is there an environment variable or some other way I can tell what machine is the leader from within the docker container?
Update: On 6 Aug 2015 I posted a question in the Amazon Elastic Beanstalk forums asking how to run "commands from the Docker host on the container". You can follow the conversation there as well, as it is useful.
I'm not sure the solution you have proposed is going to work. It appears that the current process for EB Docker deployment runs container commands before the new docker container is running, which means that you can't use docker exec on it. I suspect that your commands will execute against the old container which is not yet taken out of service.
After much trial and error I got this working through using container commands with a shell script.
container_commands:
  01_migrate_db:
    command: ".ebextensions/scripts/migrate_db.sh"
    leader_only: true
And the script:
if [ "${PROCESS}" = "WEB" ]; then
. /opt/elasticbeanstalk/hooks/common.sh
EB_SUPPORT_FILES=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)
EB_CONFIG_DOCKER_ENV_ARGS=()
while read -r ENV_VAR; do
EB_CONFIG_DOCKER_ENV_ARGS+=(--env "$ENV_VAR")
done < <($EB_SUPPORT_FILES/generate_env)
echo "Running migrations for aws_beanstalk/staging-app"
docker run --rm "${EB_CONFIG_DOCKER_ENV_ARGS[#]}" aws_beanstalk/staging-app bundle exec rake db:migrate || echo "The Migrations failed to run."
fi
true
I wrap the whole script in a check to ensure that migrations don't run on background workers.
I then build the ENV in exactly the same way that EB does when starting the new container so that the correct environment is in place for the migrations.
Finally I run the command against the new container which has been created but is not yet running - aws_beanstalk/staging-app. It exits at the end of the migration and the --rm removes the container automatically.
Update: This solution, though seemingly correct, doesn't work as intended (it seemed it was at first though). For reasons best explained in nmott's answer below. Will leave it here for posterity.
I was able to get this working using container_commands via the .ebextensions directory config files. Learn more about container commands here. And I quote ...
The commands in container_commands are processed in alphabetical
order by name. They run after the application and web server have been
set up and the application version file has been extracted, but before
the application version is deployed. They also have access to
environment variables such as your AWS security credentials.
Additionally, you can use leader_only. One instance is chosen to be
the leader in an Auto Scaling group. If the leader_only value is set
to true, the command runs only on the instance that is marked as the
leader.
So, applying that knowledge ... the container_commands.config will be ...
# .ebextensions/container_commands.config
container_commands:
  01_migrate_db:
    command: docker exec `docker ps -l -q -f 'status=running'` rake db:migrate RAILS_ENV=production
    leader_only: true
    ignoreErrors: false
  02_seed_db:
    command: docker exec `docker ps -l -q -f 'status=running'` rake db:seed RAILS_ENV=production
    leader_only: true
    ignoreErrors: false
That runs the migrations first and then seeds the database. We use docker exec [OPTIONS] CONTAINER_ID COMMAND [ARG...], which runs the appended COMMAND [ARG...] in the context of the existing container (not the host). And we get CONTAINER_ID by running docker ps -l -q -f 'status=running', i.e. the ID of the most recently created container that is running.
Use .ebextensions/01-environment.config:
container_commands:
  01_write_leader_marker:
    command: touch /tmp/is_leader
    leader_only: true
Now add directory /tmp to volumes in Dockerfile / Dockerrun.aws.json.
Then wrap all initialization commands, such as the DB migration, in a shell script that first checks whether the file /tmp/is_leader exists and runs them only in that case, as in the sketch below.
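A minimal sketch of what such a wrapper script could look like (the start command and paths are illustrative, not part of this answer):
#!/bin/sh
# run one-time initialization only on the instance marked as leader
if [ -f /tmp/is_leader ]; then
  bundle exec rake db:migrate
  bundle exec rake db:seed
fi
# then start the application as usual
exec bundle exec puma -C config/puma.rb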
Solution 1: run migrations when you start the server
In the company I work for, we have literally the equivalent of this line to start the production server:
bundle exec rake db:migrate && bundle exec puma -C /app/config/puma.rb
https://github.com/equivalent/docker_rails_aws_elasticbeanstalk_demmo_app/blob/master/puppies/script/start_server.sh
https://github.com/equivalent/docker_rails_aws_elasticbeanstalk_demmo_app/blob/master/puppies/Dockerfile
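For illustration, this is roughly what such a start script boils down to (the real one is in the repository linked above; this is just the pattern):
#!/bin/bash
set -e
# every instance runs this; rake db:migrate is a no-op when there are no pending migrations
bundle exec rake db:migrate
exec bundle exec puma -C /app/config/puma.rb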
And yes, this is a load-balanced environment (3-12 instances depending on load) and yes, they all execute this script. (We load-balance by introducing one instance at a time during deployment.)
The thing is, the first deployment batch (first instance up) will execute bundle exec rake db:migrate and run the migrations (meaning it will apply the DB changes),
and then, once done, it will start the server with bundle exec puma -C /app/config/puma.rb.
The second deployment batch (2nd instance) will also run bundle exec rake db:migrate but it will not do anything (as there are no pending migrations).
It will just continue to the second part of the script, bundle exec puma -C /app/config/puma.rb.
So honestly I don't think this is the perfect solution, but it is pragmatic and works for our team.
I don't believe there is any generic "best practice" out there for running Rails migrations on EB, as some application teams don't want to run the migrations after deployment, while others (like our team) do want to run them straight after deployment.
Solution 2: background worker environment to run migrations
If you have a worker like Delayed Job, Sidekiq, or Resque in its own EB environment, you can configure it to run the migrations:
bundle exec rake db:migrate && bundle exec sidekiq
So first you deploy the worker, and once the worker is deployed, you then deploy the web server, which will not run the migrations,
e.g. just bundle exec puma
Solution 3: Hooks
I agree that using EB hooks is OK for this, but honestly I use EB hooks only for more complex devops stuff (like pulling SSL certificates for the Nginx web server), not for running migrations.
Anyway, hooks were already covered in this SO question, so I'll not repeat the solution. I will just reference this article that will help you understand them:
https://blog.eq8.eu/article/aws-elasticbeanstalk-hooks.html
Conclusion
It's really up to you to figure out what is best for your application. But honestly, EB is a really simple tool
(compared to tools like Ansible or Kubernetes). No matter what you implement, as long as it works it's OK :)
One more helpful link for EB for Rails developers:
talk: AWS Elastic Beanstalk & Docker for Rails developers
