After some time of trying, I managed to get InfluxDB and Grafana to play together in my Docker environment, and then I had a look into my InfluxDB bucket. From everything I can see, it doesn't look like HA is actually writing anything to that bucket.
Going through the InfluxDB UI I see there are buckets, sources, etc., and I wonder if I have to somehow add HA there as a source.
On the other side, I have my configuration.yaml in HA, where it looks like this:
influxdb:
  host: 192.168.1.110
  port: !secret influx_port
  database: home_assistant
  username: !secret influx_username
  password: !secret influx_password
Is there any way I can figure out whether HA is actually writing to the bucket, or can you already tell that nothing is being written because I am missing an essential part?
In the standard Home Assistant installation there is an entity called Sun, with the entity id sun.sun. I would expect this entity to be logged in the database, but I can't find it there.
The Home Assistant logs show the following error:
InfluxDB database is not accessible due to '401: {"code":"unauthorized","message":"Unauthorized"}'. Please check that the database, username and password are correct and that the specified user has the correct permissions set.
The name of the database is correct, and the username and password are the ones I use to log in to InfluxDB.
When I look at the directory permissions on the InfluxDB Docker container, they belong to a DSM user who is in the user group.
Changing the credentials in my secrets.yaml to those of the DSM user leads to the same error message I received before.
I am running
Home Assistant 2023.1.7
Frontend 20230110.0 - latest
and
InfluxDB v2.6.1
Alright, for those interested... I managed.
Since I do not have any certificates (yet), the connection runs over plain HTTP, while version 2 of InfluxDB points at HTTPS by default. Hence, I added a simple
ssl: false
to the configuration file.
Then I got an error message basically saying the bucket "Home Assistant" was not found. No wonder, that's not the name of the bucket... So, in v2, you do not specify a database name in the configuration, but a bucket. Initially I was expecting the token to clarify that, but that's not the case, so I added the line
bucket: !secret influx_bucket
to my configuration and defined the name of the bucket in the secrets file.
Checked the configuration file, restarted HA, and Bob is your uncle...
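Putting it together, here is a minimal sketch of what the v2-style section in configuration.yaml can look like. The api_version, token, and organization keys are the InfluxDB 2.x options from the integration's documentation and were not shown in my original config; the secret names are placeholders from my setup:

influxdb:
  api_version: 2
  ssl: false
  host: 192.168.1.110
  port: !secret influx_port
  token: !secret influx_token
  organization: !secret influx_organization
  bucket: !secret influx_bucket

With api_version: 2 the integration authenticates with the token instead of a username/password pair.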
I am trying to deploy a custom SQL Server 2019 Docker image from my Docker Hub repository to a Kubernetes cluster (AKS) but am not able to log in to the DB instance from outside. It says login for user 'sa' failed.
I have verified the password requirements and literally tried the same password used in the Microsoft docs, but I still can't log in to SQL Server.
I tried using sqlcmd and Azure Data Studio, and I know it is reaching the server because the errorlog shows the following error:
Error: 18456, Severity: 14, State: 8
Login failed for user 'sa'. Reason: Password did not match that for the login provided.
I tested the same passwords in my local Docker environment and, incidentally, all of them gave the same error whenever I spun up the container. After a few attempts, I used a simpler password, which worked locally and in k8s as well.
I have raised an SR with MS to understand the password policy requirements and why some passwords didn't work. Even the one provided in their docs.
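For reference, the SA password is passed to the container as an environment variable at startup, and per Microsoft's docs it must be at least 8 characters long and contain characters from three of the four categories (uppercase, lowercase, digits, symbols). A minimal sketch, with a placeholder password:

docker run -d -p 1433:1433 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong@Passw0rd" \
  mcr.microsoft.com/mssql/server:2019-latest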
Thanks @Larnu and @Aaron Bertrand for your time and inputs.
I have a keycloak docker image and I import the configuration of my realm from a json file. And it works, so far so good.
But in my configuration there is an LDAP provider, which doesn't have the right credentials (Bind DN and Bind Credentials). They are not included in the JSON for security reasons, so I have to manually insert the credentials in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them in the JSON file inside the container with a shell script or whatever and then importing the resulting file when starting keycloak. The problem is that the credentials would then be exposed in clear text in the JSON file inside the container. So anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials in that JSON file based on environment variables (these are securely stored in the Gitlab runner and masked in the logs), starting keycloak and then removing the JSON file on the fly after keycloak successfully starts without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of a way this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
In this way, your configuration is persistent so you just enter your LDAP credentials the first time and don't need complex operations with pipelines.
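A rough docker-compose sketch of that setup, assuming the legacy jboss/keycloak image and a Postgres backend (image tags, service names, and credentials are placeholders):

version: "3"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change-me
    volumes:
      - keycloak-db:/var/lib/postgresql/data  # persistence is what lets the manual LDAP edit survive restarts
  keycloak:
    image: jboss/keycloak:16.1.1
    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change-me
    ports:
      - "8080:8080"
volumes:
  keycloak-db:

With IGNORE_EXISTING, the realm import on startup no longer overwrites the credentials you entered by hand in the Admin Console.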
I'm brand new to containers and am trying to set up MediaWiki on a Synology NAS. The Synology comes with a package for MediaWiki, but it is at 1.30 and they haven't updated it in a year. I need a newer version so I can use LDAP with the latest extensions.
So, I found this step-by-step guide on how to install the containers with Docker. I'm trying it with MediaWiki 1.34.0 and it works fine up to the point where we test the connection to the MySQL database - 5) Input your MySQL container name and its root password.
When I click Continue I get this error: Cannot access the database: :real_connect(): (HY000/2054): The server requested authentication method unknown to the client. Check the host, username and password and try again. If using "localhost" as the database host, try using "127.0.0.1" instead (or vice versa).
It seems to be that the mediawiki container and the mediawiki-mysql containers aren't networked. Looking under Network, though, they should be able to communicate: I can ping the 172.26.0.2 and 172.26.0.3 addresses, but I can't figure out how to get past step 5 in that guide.
I've tried everything I can think of, including older versions of MediaWiki (e.g. 1.31) and MySQL, but this connection problem is the sticking point each time. I've reached the limit of my capabilities here.
It seems to be that the mediawiki container and the mediawiki-mysql containers aren't networked
It would be interesting to know where this assumption is coming from. From what I read in the error message, your containers can communicate with each other perfectly fine (and they should be able to, as they seem to be on the same network, given that the mediawiki-mysql container is also on a bridged network and in the same subnet).
Let's take a look at the interesting part of the error message:
The server requested authentication method unknown to the client
That looks, to me, like a misconfiguration of MySQL. I assume you're using the latest version of the MySQL Docker container, which should be some version of MySQL 8. If you google for this, you'll find plenty of posts, even on Stack Overflow, like:
https://stackoverflow.com/a/53881212/3394281
php mysqli_connect: authentication method unknown to the client [caching_sha2_password]
To fix this with your current dataset, you could change the user's authentication plugin to mysql_native_password:
Log in to MySQL as root
Run this SQL command:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
Replace 'password' with your root password. In case your application does not log in to your database with the root user, replace the 'root' user in the above command with the user that your application uses.
Or, if you're using docker-compose or can otherwise change the executed command, you could follow this answer:
Add the following line to the command:
--default-authentication-plugin=mysql_native_password
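For example, a docker-compose sketch of the MySQL service with that flag (image tag, database name, and password are placeholders):

services:
  mediawiki-mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password  # default new accounts to the old plugin
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: mediawiki

Note this only changes the default for accounts created afterwards; existing accounts still need the ALTER USER treatment above.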
Florian's answer put me on the right trail, even though it didn't work exactly as he initially suggested (I'm marking his as the correct answer). I changed the root plugin (his item 2 above) but it still did not work. So I did the same for all of the users shown by SELECT user, authentication_string, plugin, host FROM mysql.user;.
After that, I ran FLUSH PRIVILEGES; and was then able to complete the MediaWiki 1.34.0 installation (via http://xxx.xxx.xxx.xxx:8080).
I suspect that all I really needed to do was run that ALTER USER on the two root accounts (root@localhost and root@%), but it is working now, so I'm leaving it as-is. Here is a good link that will help with these commands.
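Condensed, the sequence that worked for me looks roughly like this (the password is a placeholder; adjust the account list to whatever the SELECT returns):

-- inspect which plugin each account currently uses
SELECT user, authentication_string, plugin, host FROM mysql.user;
-- switch the accounts to the old plugin, e.g. the two root accounts
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
-- reload the privilege tables
FLUSH PRIVILEGES;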
I am using DDEV and Docker with Windows 10 Pro to set up a localhost install of Drupal 8.8 using Composer. I have set up and configured the local Drupal installation (it is a fresh install) and it appears to be running correctly, but in the admin section of the Drupal site I receive a warning to change the write permissions of sites/default/settings.php.
I tried to change the permissions using FileZilla, but it appears that local files in FileZilla do not provide access to write permissions? When I right-click the file in FileZilla, no permissions option appears.
Following troubleshooting tips from ddev, I tried to access phpmyadmin at https://mysitename.ddev.site:8036
Instead of loading phpmyadmin, I got the following error message:
Secure Connection Failed
An error occurred during a connection to dmckimep.ddev.site:8036. SSL received a record that exceeded the maximum permissible length.
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.
I've been searching around for a couple of hours now and have not found a solution. I ran ddev describe and all seems fine with the installation. The Drupal site in the container seems to run okay, and there are no port conflicts so far as I have found, so I am not sure why I cannot access phpMyAdmin.
I am a relative newbie in terms of skills, but have successfully maintained Drupal 4-7 on localhost with XAMPP and my web host. Now I am wrestling with the move to Drupal 8/Composer/Docker/DDEV. Any suggestions would be much appreciated.
Thank you!
Update 2022-09-14: DDEV has had https support for PHPMyAdmin and MailHog for years now; ddev describe will show you the URL.
(Original answer) ddev's PHPMyAdmin connection doesn't support https, just http. You can find the links for both PHPMyAdmin and MailHog using ddev describe; both are http-only, as in your example, http://mysitename.ddev.site:8036. It would be possible to provide https URLs for PHPMyAdmin and MailHog, but nobody has ever asked for them, and there's no security reason to do so.
Note that the key reason for https on the actual project URL is because real projects run behind https and people need to see problems like mixed content during the development phase. But there's no such need for PHPMyAdmin. However, I'm sure if people ever want it, we'll do it, it's not hard to do.
Just as a general add on, after ddev start you can run ddev launch -p in order to open PHPMyAdmin for the current project database in the browser.
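In other words, a quick way to get to the right URL (ddev describe and ddev launch are standard DDEV commands; -p targets the project's PHPMyAdmin as described above):

ddev start
ddev describe    # lists the project's service URLs, including the http PHPMyAdmin one
ddev launch -p   # opens PHPMyAdmin for the current project database in the browser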
Is there a way of creating a user in InfluxDB with authentication enabled? Disclaimer: I am a novice to InfluxDB.
I created a Docker container running InfluxDB with authentication enabled by setting auth-enabled = true in the http section of the influxdb.conf file.
[http]
...
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = true
...
As there are no users, I tried to create one using the following command:
docker exec influxdb influx -execute "create user admin with password 'blabla' with all privileges"
However, this fails with
"stdout": "ERR: error authorizing query: no user provided
So it is kind of a chicken-and-egg problem. You cannot create a user, because this requires logging in as a user in the first place.
It works when authentication is disabled. So I can do the following:
Create config with authentication disabled.
Start InfluxDB
Create users
Change config so authentication is now enabled.
Restart InfluxDB
but in that case I have to store the config in a specific Docker volume, and it still leaves a window during which anybody could log in without authentication. So it can be automated, but it is not an elegant solution.
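For reference, that manual flow looks roughly like this (container name and credentials taken from the commands above; the config edit happens on the host side):

# start with auth-enabled = false in influxdb.conf
docker start influxdb
docker exec influxdb influx -execute "create user admin with password 'blabla' with all privileges"
# flip auth-enabled to true in influxdb.conf, then
docker restart influxdb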
Is there an elegant solution for this problem?
Most DB images provide a way to configure an admin user and admin password via environment variables. InfluxDB does this too:
https://hub.docker.com/_/influxdb/
Set the environment variables INFLUXDB_ADMIN_USER and INFLUXDB_ADMIN_PASSWORD in your container to create the admin user with the given password. You can also enable auth via the environment variable INFLUXDB_HTTP_AUTH_ENABLED.
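A minimal docker-compose sketch of that, assuming the official 1.x image (tag, credentials, and volume name are placeholders):

services:
  influxdb:
    image: influxdb:1.8
    environment:
      INFLUXDB_HTTP_AUTH_ENABLED: "true"  # same effect as auth-enabled = true in influxdb.conf
      INFLUXDB_ADMIN_USER: admin
      INFLUXDB_ADMIN_PASSWORD: blabla     # password reused from the question, for illustration only
    ports:
      - "8086:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb
volumes:
  influxdb-data:

This way the admin user is created automatically on first start, which avoids the manual enable/disable dance described in the question.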
2021 update: apparently there might be some caveats/edge cases when it comes to automatic admin/user creation in InfluxDB in Docker - see here: https://github.com/influxdata/influxdata-docker/issues/232
If you stumble upon the message "create admin user first or disable authentication" even though you set the environment variables as suggested by @adebasi, then the above link might help you tackle the problem.
I've just checked the latest official InfluxDB Docker image and it works. However, as stated in the above link, if a meta directory is present (even if empty) under /var/lib/influxdb, then the user won't be created.
There's also another case - when using the unofficial InfluxDB image for the Raspberry Pi Zero (https://hub.docker.com/r/mendhak/arm32v6-influxdb), this user-creation functionality is not present, or at least didn't work for me (I checked the Docker image and saw no code to create users).