How to mongoimport in a remote server (Authentication error)

With this command I tried to import JSON data on a remote Ubuntu server:
mongoimport --port 27017 -d dbtest -c shops --file myfile.json --jsonArray
but I only got an error:
Failed: (Unauthorized) command insert requires authentication
I tried other variants but I can't authenticate... It only works on localhost.
How can I solve this?

I solved it!
mongoimport -u username -p 'password' --port 27017 -d dbtest -c shops --file myfile.json --jsonArray --authenticationDatabase admin
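Note that mongoimport also accepts a --host flag for pointing at a remote server. A sketch of the same command against a remote host (remote.example.com is a placeholder hostname, not from the question):
mongoimport --host remote.example.com --port 27017 \
-u username -p 'password' --authenticationDatabase admin \
-d dbtest -c shops --file myfile.json --jsonArray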

Related

Error when loading sql file in postgres db using psql - schema already exists error

I get ERROR: schema "public" already exists when running psql to load a SQL dump file into a database.
PGPASSWORD=xyzabc pg_dump -U postgres -h cloudsql-proxy -p 3306 -d development -n public --format=p >/dev-db/devBackup.sql
export PGPASSWORD=postgres
psql -U postgres -h local-dev-db -p 5432 --dbname pangea_local_dev -c 'CREATE EXTENSION IF NOT EXISTS citext with schema public;'
psql -U postgres -h local-dev-db -p 5432 pangea_local_dev -1 -n public -f /dev-db/devBackup.sql
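A common workaround, offered as a sketch rather than a confirmed fix: regenerate the dump with pg_dump's --clean and --if-exists flags, so the dump emits DROP ... IF EXISTS statements (including for the schema) before recreating objects:
PGPASSWORD=xyzabc pg_dump -U postgres -h cloudsql-proxy -p 3306 -d development \
-n public --clean --if-exists --format=p > /dev-db/devBackup.sql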

InfluxDB and Grafana: InfluxDB Error: Bad Request | Docker

I am trying to connect Grafana with InfluxDB, but it throws
InfluxDB Error: Bad Request
I have both in Docker, and I am using a tutorial where the author wrote to download and run
docker pull influxdb
docker run \
-d \
--name influxdb \
-p 8086:8086 \
-e INFLUXDB_DB=sensordata \
-e INFLUXDB_ADMIN_USER=root \
-e INFLUXDB_ADMIN_PASSWORD=toor \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
influxdb
and about Grafana
docker pull grafana/grafana
docker run -d --name=grafana -p 3000:3000 grafana/grafana
In the Grafana settings I entered everything as shown in the tutorial:
url: http://10.0.1.76:8086/
database: sensordata
user: root
passwd: toor
Could somebody please help me with this? Thank you!
The tutorial you pointed out uses an InfluxDB version prior to 2.0.
Try
docker pull influxdb:1.8.4-alpine
and use this image to start your InfluxDB container; it should work.
Thanks
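For reference, the run command from the question with the pinned 1.x image would look like this (a sketch reusing the same credentials and database name):
docker run \
-d \
--name influxdb \
-p 8086:8086 \
-e INFLUXDB_DB=sensordata \
-e INFLUXDB_ADMIN_USER=root \
-e INFLUXDB_ADMIN_PASSWORD=toor \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
influxdb:1.8.4-alpine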

docker mysql, send sql commands during exec

I am creating a MySQL 5.6 Docker container using a bash script and I would like to change the password.
How can I send SQL commands from bash to the Docker container?
build:
sudo docker build -t mysql-5.6 -f ./.Dockerfile .
run.sh:
#!/bin/bash
sudo docker run --name=mysql1 -d mysql-5.6
sudo docker exec -it mysql1 mysql -uroot -p"$base_password" \
<<< "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');"
You need to bind the MySQL port as described here. To keep port 3306 you can just expose it on your host the following way:
sudo docker run --name=mysql1 -p 3306:3306 -d mysql-5.6
After that you should be able to use mysql -u USER -p on your local host. This will then allow you to send commands to your Docker container.
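To answer the literal question of piping SQL from bash into the container, here is a sketch using a heredoc (note -i without -t, because a heredoc provides no TTY; $base_password is the variable from your own script):
# pipe the statement into the mysql client running inside the container
sudo docker exec -i mysql1 mysql -uroot -p"$base_password" <<'SQL'
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');
SQL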

Keycloak Docker HTTPS required

I have initialized https://hub.docker.com/r/jboss/keycloak/ on my Digital Ocean Docker Droplet.
$ docker run -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD={password with upcase etc.} -p 8080:8080 jboss/keycloak
success
Everything worked well and the server started on the Droplet's IP address on port 8080.
Problems started when I entered the admin console from the UI in the URL. There was a message: "HTTPS required". The only solution I have found is to log in to Keycloak from the console and change the HTTPS=required setting without the UI.
I then opened the bash for my Docker container :
$docker exec -it keycloak bash
success
As I entered my command to log in, in the keycloak/bin folder:
cd keycloak/bin
keycloak/bin $./kcadm.sh config credentials --server http://<droplet IP>:8080/auth --realm master --user admin --password {password with upcase etc.}
the bash shell freezes and gives a timeout message after some time.
The reason for logging in from bash is to run this:
keycloak/bin $ ./kcadm.sh update realms/master -s sslRequired=NONE
which would hopefully solve the original problem of HTTPS required.
Update Feb 2022:
Keycloak 17+ (e.g. quay.io/keycloak/keycloak:17.0.0) doesn't support autogeneration of a self-signed cert. Minimal HTTPS working example for Keycloak 17+:
1.) Generate a self-signed domain cert/key (follow the instructions in your terminal):
openssl req -newkey rsa:2048 -nodes \
-keyout server.key.pem -x509 -days 3650 -out server.crt.pem
2.) Update permissions for the key
chmod 755 server.key.pem
3.) Start Keycloak (use volumes for cert/key):
docker run \
--name keycloak \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=password \
-e KC_HTTPS_CERTIFICATE_FILE=/opt/keycloak/conf/server.crt.pem \
-e KC_HTTPS_CERTIFICATE_KEY_FILE=/opt/keycloak/conf/server.key.pem \
-v $PWD/server.crt.pem:/opt/keycloak/conf/server.crt.pem \
-v $PWD/server.key.pem:/opt/keycloak/conf/server.key.pem \
-p 8443:8443 \
quay.io/keycloak/keycloak:17.0.0 \
start-dev
Keycloak will be exposed on port 8443 with the HTTPS protocol with this setup. If you also use a proxy (e.g. nginx), you will also need to configure the env variable KC_PROXY properly (e.g. KC_PROXY=edge). Of course, you can also use the keycloak.conf file instead of env variables.
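A sketch of what that keycloak.conf equivalent might look like (assuming the same mounted paths as above; the KC_* env variables map to the corresponding dash-separated config keys):
# /opt/keycloak/conf/keycloak.conf
https-certificate-file=/opt/keycloak/conf/server.crt.pem
https-certificate-key-file=/opt/keycloak/conf/server.key.pem
# only if running behind a reverse proxy such as nginx:
# proxy=edge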
Old answer for Keycloak up to 16.1.1 and Keycloak legacy 17+:
Publish port 8443 (HTTPS) and use it instead of 8080 (HTTP):
docker run \
--name keycloak \
-e KEYCLOAK_USER=myadmin \
-e KEYCLOAK_PASSWORD=mypassword \
-p 8443:8443 \
jboss/keycloak
Keycloak generates a self-signed cert for HTTPS in this setup. Of course, this is not a production setup.
Update
Use volumes for your own TLS certificate:
-v /<path>/tls.crt:/etc/x509/https/tls.crt \
-v /<path>/tls.key:/etc/x509/https/tls.key \
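Combined with the run command above, the full invocation would look like this (a sketch that simply merges the two snippets):
docker run \
--name keycloak \
-e KEYCLOAK_USER=myadmin \
-e KEYCLOAK_PASSWORD=mypassword \
-v /<path>/tls.crt:/etc/x509/https/tls.crt \
-v /<path>/tls.key:/etc/x509/https/tls.key \
-p 8443:8443 \
jboss/keycloak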
This is a solution that also granted access to the admin console with no security, using https://hub.docker.com/r/jboss/keycloak/ as a starting point and DigitalOcean as the service provider:
Start container:
$ docker run {containerName}
Open bash for container:
$ docker exec -it {containerName} bash
Move to:
$ cd keycloak/bin
Create a new admin user with:
$ ./add-user-keycloak.sh --server http://{IP}:8080/admin
--realm master --user admin --password newpassword
(not add-user.sh as suggested in many places)
Restart the droplet in DigitalOcean etc. to activate the admin user created prior to the shutdown. After restarting the droplet, log in with:
$ ./kcadm.sh config credentials --server http://localhost:8080/auth
--realm master --user admin
Change the SSL settings on the realm:
$ ./kcadm.sh update realms/master -s sslRequired=NONE
This solution does not create any security but allows you to access the admin console.
After this it is suggested to start working on this:
https://www.keycloak.org/docs/latest/server_installation/index.html#setting-up-https-ssl
The following sequence of commands worked for me
On the host VM:
docker run --name key -d -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
docker exec -it key bash
Inside the container:
cd keycloak/bin/
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
Logging into http://localhost:8080/auth as user admin of realm master
Enter password: admin
./kcadm.sh update realms/master -s sslRequired=NONE
Just in case someone wants to use it on a Docker Swarm, using secrets to store the certificate files and admin credentials:
keycloak:
  image: jboss/keycloak
  container_name: keycloak-server
  hostname: keycloak-server
  ports:
    - target: 8443 # Keycloak HTTPS port
      published: 8443
      mode: host
    - target: 8080 # Keycloak HTTP port
      published: 8080
      mode: host
  networks:
    default:
      aliases:
        - keycloak-server
  deploy:
    replicas: 1
  secrets:
    - keycloak_user_file
    - keycloak_password_file
    - source: server_crt
      target: /etc/x509/https/tls.crt
      uid: '103'
      gid: '103'
      mode: 0440
    - source: server_key
      target: /etc/x509/https/tls.key
      uid: '103'
      gid: '103'
      mode: 0440
  environment:
    - KEYCLOAK_USER_FILE=/run/secrets/keycloak_user_file
    - KEYCLOAK_PASSWORD_FILE=/run/secrets/keycloak_password_file

secrets:
  server_crt:
    file: ./certs/server.crt
  server_key:
    file: ./certs/server.key
  keycloak_user_file:
    file: ./keycloak/adminuser
  keycloak_password_file:
    file: ./keycloak/adminpassword
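A sketch of deploying that snippet, assuming it is saved as docker-compose.yml:
docker stack deploy -c docker-compose.yml keycloak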
Update after JBoss/Keycloak 12.0.0
Use the following command on the server, without logging in to the Docker container via bash:
$ docker exec <container_id> /opt/jboss/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE --server http://localhost:8080/auth --realm master --user <admin_username> --password <admin_password>
Logging into http://localhost:8080/auth as user admin of realm master
Finally got it working with HTTPS (Keycloak 14.0.0) in the simplest way after trying innumerable approaches.
Create a docker-compose.yml and DO NOT specify the volumes for cert and key:
version: '2'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:14.0.0
    command: -c standalone.xml
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    ports:
      - 8080:8080
      - 8443:8443
Run your docker-compose.yml with docker-compose up.
Wait over a minute and Keycloak will generate a self-signed certificate automatically! You'll see it in the logs on the CLI:
WARN [org.jboss.as.domain.management.security] (default I/O-3) WFLYDM0113: Generated self signed certificate at /opt/jboss/keycloak/standalone/configuration/application.keystore. Please note that self signed certificates are not secure, and should only be used for testing purposes. Do not use this self signed certificate in production.
Access your Keycloak server on port 8443.
If you don't see the logs indicating the generation of the self-signed certificate, just try to access your server including 'https://' and ':8443', like 'https://your_ip_or_dns:8443/auth'.
I also experienced bash freezing when trying to config credentials.
Adding the --password argument to the config credentials command resulted in a successful execution:
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password {YOUR_PASSWORD_HERE}
Execute ./kcadm.sh config credentials for examples of secure/alternate ways to pass the argument.
For cases where Keycloak runs in Docker, this worked for me:
docker exec -it demo-keycloak bash
/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm realmname --user admin --password admin
/opt/jboss/keycloak/bin/kcadm.sh update realms/realmname -s sslRequired=NONE
Explanation:
The first line gives you an interactive bash shell in the Keycloak container.
The second and third lines authenticate you and modify the realm settings using the Keycloak admin CLI. There is no need for a container restart.
If you just want to disable HTTPS, you can do it with this:
docker exec -it {containerID} bash
cd keycloak/bin
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
./kcadm.sh update realms/master -s sslRequired=NONE
Pay attention to the image you use. If you use quay.io/keycloak/keycloak, you must explicitly specify the cert and key paths with KC_HTTPS_CERTIFICATE_FILE and KC_HTTPS_CERTIFICATE_KEY_FILE. This is a little different from the jboss one.

Dump remote MySQL database from a docker container

I'm trying to dump a remote database into a local docker container's database.
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p
This gives me the remote dump just fine.
So here are my attempts:
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p \
| docker exec -i my_container mysql -u my_local_user -pmy_local_password \
-D my_local_database
$ docker exec -i my_container bash -c \
"mysqldump my_remote_database -h my_remote_host.pp -u my_remote_user -p \
| mysql -u my_local_user -pmy_local_password -D my_local_database"
Neither seems to have any effect on the local database (no error, though).
How can I transfer this data?
I always like to hammer out problems from inside the container in an interactive terminal.
First, get the container image running and check the following:
If you bash into the local container with docker exec -ti my_container bash, does the remote hostname my_remote_host.me resolve, and can you route to it? Use ping or nslookup.
From inside the interactive bash shell, can you connect to the remote db? Just try a vanilla mysql CLI connect using the credentials.
Finally try the dump from inside the interactive terminal and just create a mydump.sql dump file.
Then check that inside the container:
You can connect to the local DB with the credentials provided (if using a TCP connection rather than a socket, the hostname should resolve, but it looks like your local db is using a socket).
See if you can load the dump file using mysql -u my_local_user -pmy_local_password -D mylocaldb < mydump.sql (see the sketch below).
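A minimal in-container version of that test, as a sketch (the bare -p makes mysqldump prompt for the remote password interactively, which is fine in this interactive shell):
docker exec -ti my_container bash
# inside the container:
mysqldump -h my_remote_host.me -u my_remote_user -p my_remote_database > /tmp/mydump.sql
mysql -u my_local_user -pmy_local_password -D my_local_database < /tmp/mydump.sql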
If this all works then you can start looking at why the pipe is failing, but I suspect the issue may be with one of the resolutions.
I notice you say local database. Is the 'local database' inside the container or on the Docker host running a socket connection?
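One hedged observation on the original attempts: both use a bare -p for the remote dump, which makes mysqldump prompt for a password interactively; inside a non-interactive pipe that prompt can stall or swallow the stream, so the pipe appears to do nothing. A sketch that passes the password inline (my_remote_password is a placeholder):
docker exec -i my_container bash -c \
"mysqldump -h my_remote_host.me -u my_remote_user -p'my_remote_password' my_remote_database \
| mysql -u my_local_user -pmy_local_password -D my_local_database"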
