I was using the following command to restore from a backup:
pv backup.sql.gz | gunzip | pg_restore -d $DBHOST
This was helpful because I could roughly see how far along the restore was.
I recently moved the DB into a Docker container and wanted to be able to run the same restore, but I have been struggling to get the command to work. I'm assuming I have to redirect the gunzip output somehow, but I haven't had any luck. Any help would be appreciated.
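For reference, a minimal sketch of what usually works here, assuming the Postgres container is named postgres_db and the target database/user are mydb/postgres (all placeholder names): the important part is docker exec -i without -t, so that stdin stays attached and the decompressed stream reaches pg_restore:
pv backup.sql.gz | gunzip | docker exec -i postgres_db pg_restore -U postgres -d mydb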
My project is running inside a docker container (web_container) and I need a way to get web_container's logs from within the project.
I tried running the command docker logs web_container >> file.log, but as I understand it, the command is not recognized inside the docker container.
Is there any way to get the logs while in the container?
Logs are stored on the host, so you cannot access them from inside the container. But it is possible to mount the folder into the container (read-only preferred): docker run -v /var/lib/docker/containers:/whereever/you/want2/mount:ro.
By default, the log file is at /var/lib/docker/containers/[container-id]/[container-id]-json.log.
You can obtain the container ID from inside the container with cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/". (This may depend on your system; in any case, it is in /proc/self/cgroup.)
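Putting it together, a sketch (the mount point /host-containers is just an example path, and the cgroup trick above may vary by system):
docker run -v /var/lib/docker/containers:/host-containers:ro my-image
# then, from inside the container:
CID=$(cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/")
tail -f /host-containers/$CID/$CID-json.log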
Remark:
This is a technically working answer to your question. For most use cases, the comments by David and The Fool describe the more elegant way of solving this.
I am trying to perform a COPY of a large chunk of data from S3 to Redshift. It works normally from my Mac via psql, but when I try to run it from a docker container running locally (using docker-airflow), I always get this error:
SSL SYSCALL error: EOF detected
The connection to the server was lost. Attempting reset: Succeeded.
Here is an example of how I run it locally:
# First, I connect using psql
psql -h <connection_string> -U meh -d database -p 5439
# Then I issue this command.
COPY test.test from 's3://data/manifest_uuid' with credentials ''
FORMAT AS JSON 'auto' TRUNCATECOLUMNS COMPUPDATE ACCEPTINVCHARS manifest MAXERROR 100;
Within the Airflow container, the same query is executed using psycopg2:
conn = psycopg2.connect(dbname=database, host=endpoint, port=port, user=user, password=password, sslmode='require')
with conn.cursor() as cur:
    cur.execute(q, args)
    if fetch_one:
        result = cur.fetchone()
        if result is None:
            return None
        return result
    elif fetch_all:
        return cur.fetchall()
    else:
        conn.commit()
Here is how I try to run it from the container:
# I try to connect to the container
docker exec -it `docker ps|grep worker|awk 'END {print $1}'` /bin/bash
And then I run exactly the same commands as I do locally.
I can connect to the container, run psql from there and do all sorts of queries, and even the COPY command works, if the file is small enough.
I tried following https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html#change-tcpip-settings, and I verified that the suggested options are set to the suggested values, but I still get this issue.
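For reference, the keepalive behaviour that page describes can also be requested per connection via standard libpq options (the numeric values below are only illustrative), which works from psql inside the container as well:
psql "host=<connection_string> port=5439 dbname=database user=meh sslmode=require keepalives=1 keepalives_idle=60 keepalives_interval=30 keepalives_count=5"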
UPDATE: Here is a gist with a Dockerfile. To reproduce, you also need a Redshift cluster and a 1-2 GB JSON file that can be loaded into Redshift with the COPY command. If you create a container from that image, connect to it, and try to run the COPY command, you will most likely see the same issue.
https://gist.github.com/drapadubok/da04548dace5d4ff4198631841322402
This is caused by a recent regression bug in Docker for Mac, which makes TCP connections time out.
Updating Docker to v17.12.0-ce-mac49 fixes the problem, as confirmed by the OP.
What I did:
neo4j console
(works fine)
Ctrl-C
Upon restarting, I get the message above.
I deleted /var/lib/neo4j/data/databases/graph.db/store_lock
Then I get:
Externally locked: /var/lib/neo4j/data/databases/graph.db/neostore
Is there any way of cleaning the lock (short of reinstalling)?
Killing the Java process and deleting the store_lock worked for me:
Found the lingering process,
ps aux | grep "org.neo4j.server"
killed it,
kill -9 <pid-of-neo4js-java-process>
and deleted
sudo rm /var/lib/neo4j/data/databases/graph.db/store_lock
Allegedly, just killing the lingering process may do the trick, but I went ahead and deleted the lock anyway.
You can kill the java process and delete the store_lock file. It doesn't seem to harm the database integrity.
I found this question because I hit the same error message while trying to import CSV data using the neo4j-admin tool.
In my case the problem was that I first launched neo4j server:
docker run -d --name testneo4j -p7474:7474 -p7687:7687 -v /path/to/neo4j/data:/data -v /path/to/neo4j/logs:/logs -v /path/to/neo4j/import:/var/lib/neo4j/import -v /path/to/neo4j/plugins:/plugins --env NEO4J_AUTH=neo4j/test neo4j:latest
and then tried to launch the import (for the CSV data files, see here):
docker exec -it testneo4j neo4j-admin import --nodes=Movies=import/movies.csv --nodes=Actors=import/actors.csv --relationships=ACTED_IN=import/roles.csv
This leads to the lock error since the server acquires the database lock, and neo4j-admin is an independent tool which needs to acquire the database lock too. Kills, lock file removals, and sudos didn't work for me.
What helped:
docker run --rm ... neo4j:latest neo4j-admin ... - this performs a one-shot import into an empty database (see the sketch after this list). No dead container remains, only the imported data in the external volume. (Note that the command fails if the database is not empty.) The point is that the Docker entrypoint starts the server unless the CMD in the Dockerfile is overridden.
docker run -d ... neo4j:latest - this runs the neo4j server
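For illustration, a one-shot import along those lines could look like this (reusing the volume paths and CSV files from the commands above; adjust to your setup):
docker run --rm -v /path/to/neo4j/data:/data -v /path/to/neo4j/import:/var/lib/neo4j/import neo4j:latest neo4j-admin import --nodes=Movies=import/movies.csv --nodes=Actors=import/actors.csv --relationships=ACTED_IN=import/roles.csv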
Running a CentOS 7.1.1503 docker container; when adding a few lines of code (node.js), it crashes with the error:
/bin/sh: line 1: 6 Segmentation fault (core dumped) node --inspect server.js
the file /proc/sys/kernel/core_pattern contains the following:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
There's no /var/spool/abrt directory within the container. The /var/spool/abrt directory on the server running the containers doesn't get anything.
I can't change the /proc/sys/kernel/core_pattern to point to another directory/program because of the read-only fs thing. Can't run the container in privileged, either :-(
I've read through tonnes of docker/stackexchange and other docs and can't figure out where/how to get the core dump.
In the olden days I'd play with the settings and wreck a replica of the machine, but this is a production container and I'm very limited in what I can do and when/how many times I can crash it :-(
Host is RHEL 7.1, docker version is 1.7
EDIT: On my laptop, running the same container (with docker 1.12, though), I sometimes get core dumps on the host in /var/spool/abrt by running sleep 60 & in the container and then running (still in the container) kill -ABRT <pid of the sleep 60>. By "sometimes" I mean that trying again doesn't always work; I'm not sure why, but about 2 out of 3 tries succeed. I figure this might have to do with a privileged run or something? I run the container with docker run -it centos bash. If I can understand this, I might be able to replicate the behavior in the production env.
Execute the following command to get a report of the paths of the upper layer of the filesystem of all the centos containers you may have launched:
docker ps -a | grep centos | awk '{print $1}' | xargs docker inspect | grep UpperDir | cut -d\" -f4
Bear in mind that you will have to become root to be able to access them (run sudo su before cd'ing into them).
The command above does the following:
Get a report of all the containers existing in your host
Select only the ones that have centos in their line
Get the first row of that report (container ID)
Get the inspect of every one of those containers
Look for the UpperDir parameter (upper layer of your container filesystem, and the one you tinkered with when your process crashed)
Cut the UpperDir string for improved presentation
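If you only care about one specific container and are on the overlay2 storage driver (an assumption about your setup; overlay2 exposes the upper layer path directly in the inspect output), a shorter variant would be:
sudo ls -la "$(docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' <container-id>)"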
After that, you are on your own. I am afraid I am of no help with the crash itself. But if you are still doubtful, write me some lines and I'll do my best to help.
I hope this helps you!
I ended up skipping abrt and changing the core_pattern file to point to a directory on the host. Here's my two bytes on getting a core dump out of a crashing docker instance:
On the host:
docker run --privileged -it -v /tmp:/core image-name bash
(you can do this with docker exec, but my machine didn't have the flags available to exec)
--privileged = required to be able to edit the /proc/sys/kernel/core_pattern file
-v = to mount the /tmp directory of the host in a /core directory in the container
In the instance:
set the location of the core dumps to /core (which is a mount of the /tmp dir in the host machine):
echo "/core/core-%e-%s-%u-%g-%p-%t" > /proc/sys/kernel/core_pattern
test it:
sleep 60 &
kill -SEGV <pid of that sleep process>
You should be able to see a core file in the /tmp dir on the host. When my instance crashed, I finally got the dump on the host machine.
What CLI commands do I need to use in order to check if the image in my private docker registry is a newer version than the one currently running on my server?
E.g. I have a container that I ran using docker run -d my.domain.com:5000/project1
and I would like to know if it is out-of-date.
Brownie points to @mbarthelemy and @amuino, who put me on track. From that I was able to come up with the following bash script that others may find useful. It just checks whether the tag on the registry is different from the currently executing container.
#!/bin/bash
# ensure running bash
if ! [ -n "$BASH_VERSION" ]; then
    echo "this is not bash, calling self with bash....";
    SCRIPT=$(readlink -f "$0")
    /bin/bash $SCRIPT
    exit;
fi

REGISTRY="my.registry.com:5000"
REPOSITORY="awesome-project-of-awesomeness"

LATEST="`wget -qO- http://$REGISTRY/v1/repositories/$REPOSITORY/tags`"
LATEST=`echo $LATEST | sed "s/{//g" | sed "s/}//g" | sed "s/\"//g" | cut -d ' ' -f2`

RUNNING=`docker inspect "$REGISTRY/$REPOSITORY" | grep Id | sed "s/\"//g" | sed "s/,//g" | tr -s ' ' | cut -d ' ' -f3`

if [ "$RUNNING" == "$LATEST" ]; then
    echo "same, do nothing"
else
    echo "update!"
    echo "$RUNNING != $LATEST"
fi
Even though there is no built-in command for this, you can use the API to check for tags on the registry and compare them against what you are running.
$ curl --silent my.domain.com:5000/v1/repositories//project1/tags | grep latest
{"latest": "116f283e4f19716a07bbf48a562588d58ec107fe6e9af979a5b1ceac299c4370"}
$ docker images --no-trunc my.domain.com:5000/project1
REPOSITORY           TAG      IMAGE ID                                                           CREATED      VIRTUAL SIZE
my.domain.com:5000   latest   64d935ffade6ed1cca3de1b484549d4d278a5ca62b31165e36e72c3e6ab8a30f   4 days ago   583.2 MB
By comparing the ids, you can know that you are not running the latest version.
Not sure about the version, but if you mean the tag of the image, it can easily be checked through the registry v2 API. Note that in the context of docker images, a tag has nothing to do with the version of the software.
Use the curl command in the CLI:
curl <docker_host_ip>:<docker_host_port>/v2/<repository_name>/tags/list
To get a list of the repositories pushed to the private registry, use
curl <docker_host_ip>:<docker_host_port>/v2/_catalog
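Going one step further, you can compare what you are running against the registry without pulling anything, by asking the v2 API for the manifest digest and comparing it to the local image's repo digest (a sketch; host, repository, and tag are placeholders):
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" <docker_host_ip>:<docker_host_port>/v2/<repository_name>/manifests/<tag> | grep -i Docker-Content-Digest
docker inspect --format '{{ index .RepoDigests 0 }}' <docker_host_ip>:<docker_host_port>/<repository_name>:<tag>
# if the sha256 digests differ, the registry holds a newer image than the one you have locally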
AFAIK, this is not possible right now.
The only thing I see would be to pull from the registry to check if there is a new version of your image (which would then have a different ID than your locally stored image):
docker pull your/image:tag
But yes, that would mean fetching the new images (if any).
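As a quick illustration of that pull-based check (the image name is a placeholder), comparing the local image ID before and after the pull tells you whether anything new was fetched:
before=$(docker images -q your/image:tag)
docker pull your/image:tag
after=$(docker images -q your/image:tag)
if [ "$before" != "$after" ]; then echo "a newer image was pulled"; fi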
If you have a look at the registry API documentation, you'll see that, if you don't mind scripting a bit, you could get this information without actually downloading the image, by fetching the image tags and checking whether the returned ID for a tag matches the ID of the local image with the same tag.
That being said, having something to "check for updates" integrated into the docker CLI would be a nice addition.
I don't know if this works as advertised; it's just a quick hack I put together. But it will at least give you a little push toward how this might be done.
#!/bin/bash
container=$1
imageid=$(docker inspect --format '{{.Config.Image}}' ${container})
echo "Running version from: $(docker inspect --format '{{.Created}}' ${container})"
echo "Image version from: $(docker inspect --format '{{.Created}}' ${imageid})"
Example output:
[root@server ~]# sh version_check.sh 9e500019b9d4
Running version from: 2014-05-30T08:24:08.761178656Z
Image version from: 2014-05-01T16:48:24.163628504Z
You can use a bash script running in a cron scheduled task:
#!/bin/bash
docker_instance='YOUR_RUNNING_INSTANCE'
instance_id=$(docker ps -qa --filter name=$docker_instance)
image_name_tag=$(docker inspect $instance_id | jq -r '.[] | .Config.Image')
if [ "-${image_name_tag}-" != "--" ]; then
    status=$(docker pull $image_name_tag | grep "Downloaded newer image")
    if [ "-${status}-" != "--" ]; then
        echo ">>> There is an update for this image ... "
        # stop the docker instance
        docker stop $docker_instance
        # remove the docker instance
        docker rm $docker_instance
        # restart the docker instance using the last command, with the new image from the remote repository
        run-my-docker-instance.sh
    fi
fi
An older question, but this sounds like a problem that Watchtower can solve for you. It is another dockerized application that runs adjacent to your other containers and periodically checks whether their base images have been updated. When they have, it downloads the new image and restarts them.
If given the correct credentials, it can work with a local registry.
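For reference, Watchtower itself is typically started as a container with access to the Docker socket; a minimal sketch (using the containrrr image mentioned below):
docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower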
FWIW I solved it with the bash script below for a while until I decided that Watchtower was the easier way to go (by the way: note the maintainer switched from v2tec to containrrr a while ago, the v2tec one isn't getting updates anymore). Watchtower gave me an easy way to schedule things without having to rely on cron (which gets blown away in a reinstall - granted, you could have something like Ansible recreate that for you, but this was easier for me). It also adds easy notifications (I like using Telegram) for updates, which I appreciate knowing about so that if something goes sideways at least I know there was an update that could be to blame.
I'm not saying this will never cause issues, but I've been running Watchtower on various Docker hosts (3 of them, 2 in my homelab, one on Linode) for about a year now and I have yet to have an issue with it. I prefer this to having to manually update my containers on a regular basis. For me the risk of something getting screwed up is lower than the risks of running outdated containers, so this is what I chose for myself. YMMV.
I honestly don't get the apparent hate for automated update solutions like Watchtower - I see so many comments saying that you shouldn't use automated updates because they'll cause problems... I don't know what folks have been burned by - would love to hear more about where this caused problems for you! I mean that, I genuinely don't understand and would love to learn more. I keep having some vague unease about doing automated updates, but given my experience so far I can honestly only recommend it. I used to use Diun for getting notified about updates and then would go and manually update my containers. That got real old after it became a daily chore! (With ~45 different containers running, you can pretty much guarantee that at least one of them will have an update every single day.)
If I really need to keep a container from updating, I can always stick a com.centurylinklabs.watchtower.enable=false label on the container. Or you can whitelist only the containers you want automatically updated, or... There are loads of possibilities with Watchtower.
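For example, excluding a single container from automatic updates is just a matter of starting it with that label (the image name is a placeholder):
docker run -d --label com.centurylinklabs.watchtower.enable=false my-pinned-image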
However, for reference, if you still want to use it, see my script below. I used docker-compose pull to get the latest version - it checks first whether there is a new image, so it doesn't waste a lot of bandwidth if there is nothing to update. It's effectively like doing the curl you guys used. Also, I prefer the docker inspect -f commands for checking the versions over the solutions that pipe through grep, sed, and co., since the former is less likely to be broken by changes to the docker inspect output format.
#!/usr/bin/env bash
cd /directory/with/docker-compose.yml/
container_name=your-awesome-container   # name of the running container (placeholder)
image_name=your-awesome-image           # image it runs (placeholder)
docker-compose pull
# version label of the image the running container was created from
container_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$container_name")
# version label of the freshly pulled image
latest_image_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$image_name")
if [[ "$container_version" != "$latest_image_version" ]]; then
    echo "Upgrading ${image_name} from ${container_version} to ${latest_image_version}"
    docker-compose down
    docker-compose up -d
fi