How to solve an EOF error when connecting to InfluxDB? - docker

I've been using InfluxDB 1.8 and Grafana through Docker for months without any problem until today. Suddenly I can't access InfluxDB. The error I get is:
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: EOF
Please check your connection settings and ensure 'influxd' is running.
Docker is running, I've checked that. Some tests I did were to restart Docker, change the port, run InfluxDB without Docker, and finally try it without the databases (all empty).
It only works when I delete the databases, but then I lose all the content. I thought that maybe some file is corrupt, but I don't know which one. Any idea how to fix this error?
Thanks in advance
EDIT: Well, I finally deleted the corrupted file, but the EOF error persists. However, if I run the verify tool now, there are no broken blocks. Maybe the file cannot simply be deleted because there are references to its content somewhere?

If you suspect that the files might be corrupted, you can use the following tool to verify the integrity of the TSM files:
influx_inspect verify -dir <storage_root>
See the influx_inspect documentation for more details.
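For example, with the official InfluxDB 1.8 image the storage root is typically /var/lib/influxdb, so a check might look like this (the container name influxdb is an assumption; substitute your own):
docker exec influxdb influx_inspect verify -dir /var/lib/influxdb
Any TSM block that fails its checksum is reported along with the file it lives in, which should narrow down the shard to inspect.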

Related

How to disable 'auditd' in Docker

I don't want/need 'auditd' on my Linux box.
The problem is that this is not a standard Linux distribution but a special, minimized version called 'CoreElec' (www.coreelec.org).
So there is no 'apt' and no 'auditctl', and the crontab is empty.
Docker is what triggers the auditing: if I stop Docker, the audit messages stop as well.
I can find no way to stop it, or at least to stop its logging, which results in a syslog full of audit messages.
If I could at least get rid of the logs, it would help.
Thanks for any help.
Gerry
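Not from the thread, but one generic avenue to try: since there is no auditctl on the box, the kernel audit subsystem can often be silenced at boot with a kernel command-line parameter (a sketch; where the command line is configured depends on your device's bootloader):
cat /proc/cmdline   # check what the kernel was booted with
# then add audit=0 to the kernel command line in your bootloader config and reboot
With audit=0 the kernel audit subsystem is disabled, so the audit messages Docker triggers should stop appearing in syslog.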

Datagrip DB connection error - Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost

I first saw this issue when I started working on the project, but didn't look for a solution at the time. By now I think it is a real issue and have tried to find its cause, but I couldn't find any likely answers.
Reproducing the issue
Docker and MySQL are integrated for backend development (Elixir based).
docker-compose up builds the container and the DB is seeded with large testing data - there is no problem using the dockerized backend and DB at this point.
Restart the PC, then docker-compose up to use the existing container.
Now, an exception error shows in Datagrip:
java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
So I had to run docker-compose stop, then docker-compose down to remove the container, and after that docker-compose up to use the DB properly again.
I'm not sure why it is impossible to use an existing container after restarting the PC.
Please suggest any solution, as I am having difficulty seeding the large testing data every time I restart the PC.
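Not a root-cause answer, but one thing worth checking is whether the MySQL data directory sits on a named volume; if it only lives in the container's writable layer, the seeded data disappears every time the container is removed. A minimal sketch of the relevant docker-compose.yml fragment (the service name db and the volume name db_data are assumptions; adjust them to your file):
services:
  db:
    volumes:
      - db_data:/var/lib/mysql   # named volume survives container removal
volumes:
  db_data:
With the data on a named volume, docker-compose down followed by docker-compose up recreates the container but keeps the seeded database, so a PC restart no longer forces a re-seed.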

Creating a volume size limit in Docker that actually enforces the limit - without first downloading the whole huge file and only afterwards saying the download failed?

I'm trying to create a container disk size limit in Docker. Specifically, I have a container that downloads data, and I want this data to stay under a limit that I can set beforehand.
So far, what I've created works on the surface level (it prevents the file from actually being saved onto the computer) - however, I can watch the container doing its work, and I can see the download complete to 100% before it says 'Download failed.' It therefore seems like it's downloading to a temporary location and only checking the size of the file before moving it to the final location (or not).
This doesn't fully resolve the issue I was trying to fix, because the download itself obviously still consumes a lot of resources. I'm not sure what exactly I'm missing here.
This is what creates the above behavior:
sudo zfs create new-pool/zfsvol1
sudo zfs set quota=1G new-pool/zfsvol1
docker run -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -v /new-pool/zfsvol1:/data containerName azureFileToDownload
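As a sanity check (not part of the original post), the quota itself can be confirmed independently of Docker by writing straight into the dataset; once roughly 1 GB has been written, the write should fail with 'Disk quota exceeded':
dd if=/dev/zero of=/new-pool/zfsvol1/fill.bin bs=1M count=2048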
I got the same behavior when running the container interactively without volumes and downloading into the container itself. I tried changing the storage driver (as reported by docker info) from overlay to zfs, and it didn't help. I also looked into Docker plugins, but they didn't seem like they would resolve the issue.
This is all run inside an Ubuntu VM; I made a zfs pool to test all of this. I'm pretty sure this is not supposed to happen because it's not very useful. Would anyone have an idea why this is happening?
OK - so I actually figured out what was going on, and as #hmm suggested, the problem wasn't caused by Docker. The place it was buffering to was memory, before writing to disk, and that was the issue. It seems that azcopy (Azure's copy command) first buffers the download in memory before saving it to disk, which is not great at all, but there is nothing to be done about it in this case. I think my approach itself works completely.
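If the culprit really is azcopy buffering in RAM, two knobs may contain the damage (a sketch; AZCOPY_BUFFER_GB applies to azcopy v10, and the values here are illustrative): cap azcopy's buffer, and cap the container's memory so a download can't exhaust the host:
docker run -m 1g -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -e "AZCOPY_BUFFER_GB=0.5" -v /new-pool/zfsvol1:/data containerName azureFileToDownload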

Docker Desktop Kubernetes Unable to connect to the server: EOF

Earlier today I increased my Docker Desktop resources, but ever since it restarted, Kubernetes has not been able to complete its startup. Whenever I try to run a kubectl command, I get Unable to connect to the server: EOF in response.
I had thought it happened because I hadn't deleted a Helm chart before adjusting the resource values in Settings, so those resources were assigned to the pods instead of the Kubernetes API server. But I have not been able to fix this issue.
This is what I have tried thus far:
Restarting Docker again
Resetting Kubernetes
Resetting Docker to factory settings
Deleting the VM in Hyper-V and restarting Docker
Uninstalling and reinstalling Docker Desktop
Deleting the pki folder and restarting Docker
Setting the KUBECONFIG environment variable
Deleting .kube/config and restarting
Another clean reinstall of Docker Desktop
But Kubernetes does not complete its startup, so I still get Unable to connect to the server: EOF in response.
Is there anything I haven't tried yet?
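Not from the thread, but before yet another reinstall a few generic sanity checks can at least confirm what kubectl is actually pointing at (the expected values assume a default Docker Desktop setup):
kubectl config current-context   # expect: docker-desktop
kubectl config view --minify     # the server should be https://kubernetes.docker.internal:6443
kubectl version                  # the client version prints even while the server is unreachable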
I'll share that what solved this for me was the Docker Desktop Settings feature 'Reset Kubernetes Cluster'. I know that #shenyongo said that a 'reset kubernetes' didn't work, and I suppose they meant this one.
But for the sake of other readers who may find this, I had this same error message (with Docker Desktop on Windows 11, using wsl2), and the solution for me was indeed to do this:
open the Settings page (right-click the Docker Desktop icon in the status tray)
then choose "Kubernetes" on the left
then choose "reset kubernetes cluster"
Yes, that warns that "all stacks and kubernetes resources will be deleted", but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did the trick. In moments, all my k8s functionality was back to working.
As background, k8s had been working fine for me for some time. It was just that one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer. To be clear, like the OP here I had tried restarting Docker Desktop, restarting the host machine, even downloading and installing an available DD update (I was only a bit behind), and none of those worked. I didn't proceed to ALL the steps shenyongo did, as I thought I'd try this first, and the reset worked.
Hope that may help others. I realize some may fear losing something, but this helps stress the power of declarative vs. imperative k8s configuration: it SHOULD be easy to recreate almost everything if necessary. I realize it may not be so for everyone.

Re-running docker-compose in Windows says network configuration changed

I have docker-compose version 1.11.2 on Windows and am using a version 2.1 docker-compose.yml, but whenever I run something like docker-compose up or docker-compose run a subsequent time, I get an error that the network needs to be recreated because configuration options changed (even if I didn't change anything). I can run docker network rm to remove the network, but from other documentation and posts about docker-compose on Linux this seems unnecessary.
I can reproduce this reliably but can't really find any further information. Can anyone explain why I keep getting errors telling me to recreate the network (I'm using a transparent driver to download some things while building the image, but even the nat driver gives me a similar error), or at least how to work around it? One of my scenarios is to be able to run docker-compose run on one of the services a couple of times on the same machine as part of a cloud build/test.
Turns out this was a bug, and it was fixed in a subsequent update several weeks later. I was told by one of the Docker developers that the Windows 10 Creators Update was required as well.
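If updating isn't an option, a commonly suggested workaround for this class of error is to create the network yourself and declare it external, so docker-compose stops trying to recreate it (a sketch; the network name buildnet is an assumption):
docker network create -d nat buildnet
and in docker-compose.yml (version 2.1 syntax):
networks:
  default:
    external:
      name: buildnet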
