I have installed Superset using the Docker method, but now I can't connect to my MySQL database because the driver can't be loaded.
I have tried installing it via pip install mysqlclient, and it is reported as successfully installed:
Collecting mysqlclient
Using cached mysqlclient-1.4.6.tar.gz (85 kB)
Building wheels for collected packages: mysqlclient
Building wheel for mysqlclient (setup.py) ... done
Created wheel for mysqlclient: filename=mysqlclient-1.4.6-cp38-cp38-linux_x86_64.whl size=108116 sha256=b05681e22caca22b405d0b518651bb8849df47e31f124571dd8788d585dd522f
Stored in directory: /root/.cache/pip/wheels/8a/3c/e6/347e293dbcd62352ee2806f68d624aae821bca7efe0070c963
Successfully built mysqlclient
Installing collected packages: mysqlclient
Successfully installed mysqlclient-1.4.6
I have restarted the Docker container, but the driver still can't be loaded inside Superset.
What needs to be done? How can I install the missing MySQL driver so the Docker container can see it and use it?
I noticed the requirements.txt file inside the installation directory and added this line:
mysqlclient==1.4.6
Then I executed the command
docker-compose up --build
and now I'm able to connect.
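If you would rather not edit the stock requirements.txt, a hedged alternative is to layer the driver on top of the Superset image in a small Dockerfile of your own (a sketch, assuming the apache/superset base image and a Debian-based build; package names and versions may need adjusting for your image):

FROM apache/superset
USER root
# mysqlclient is a C extension and needs a compiler plus the MySQL client headers to build
RUN apt-get update && \
    apt-get install -y build-essential default-libmysqlclient-dev && \
    pip install mysqlclient==1.4.6
USER superset

Point docker-compose at this Dockerfile and rebuild as above.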
I attempted to update Docker Compose on Linux using the instructions on the Docker website. I get the following when I type sudo apt install docker-compose-plugin:
After this operation, 25.7 MB of additional disk space will be used.
Selecting previously unselected package docker-compose-plugin.
(Reading database ... 139053 files and directories currently installed.)
Preparing to unpack .../docker-compose-plugin_2.10.2~ubuntu-jammy_amd64.deb ...
Unpacking docker-compose-plugin (2.10.2~ubuntu-jammy) ...
Setting up docker-compose-plugin (2.10.2~ubuntu-jammy) ...
When I run docker compose, it does not work. When I run docker-compose version, I see:
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.3
CPython version: 3.6.9
What step am I missing? I want to be able to use a 2.x version invoked as docker compose. Thank you.
For some reason, Docker was installed as a Snap package rather than via the method suggested on the Docker website, which uses apt. I could not find a way to install or update docker-compose on its own with Snap, so I uninstalled Docker with sudo snap remove docker, then reinstalled it using the most up-to-date directions on Docker's site, using apt.
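For anyone following along, the sequence was roughly this (it assumes you have already added Docker's apt repository as described on their site):

# remove the Snap-packaged Docker
sudo snap remove docker
# install Docker and the Compose v2 plugin from Docker's apt repository
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# verify that the space-separated form now answers with a 2.x version
docker compose version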
I'm attempting to learn how to create a Laravel Docker image by following a tutorial on DigitalOcean using WSL. Following the instructions on the Docker Hub page, however, yields an error:
❯ docker run --rm --interactive --tty -v $(pwd):/app composer install
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 94 installs, 0 updates, 0 removals
- Installing voku/portable-ascii (1.4.10): Failed to download voku/portable-ascii from dist: Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
Now trying to download from source
- Installing voku/portable-ascii (1.4.10):
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-custom-installers] [--no-autoloader] [--no-scripts] [--no-progress] [--no-suggest] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--ignore-platform-reqs] [--] [<packages>]...
How can I diagnose what I'm doing wrong?
It turns out that the underlying problem had nothing to do with Docker at all. In fact, Composer was trying to tell me what the problem was all along, but I dismissed it as just a symptom of a deeper issue:
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
This message was the crux of it all. I noticed that the directory mentioned, [...]/helper, was empty, so I tried to remove it by hand with rmdir. Instead, I got a No such file or directory error message. I attempted many other ways to kill this directory: removing the entire project directory with rm -rf ~/laravel-app from the home folder, etc. Nothing worked.
Some digging around on the internet suggested that it could be an NTFS corruption if I was running into this issue on Windows. Since I am, indeed, attempting this on WSL (Windows Subsystem for Linux), I gave their suggested fix a try: running chkdsk /F in CMD/PowerShell. A reboot was necessary to complete this task, but after getting everything back up and trying those first few tutorial steps again, I was able to get composer to install the Laravel dependencies without a hitch.
Bottom line: If you run into this sort of issue on WSL, please try running chkdsk /F and reboot. You might just have a similar file system corruption.
There are two possibilities for this error:
1 - You did not execute the command inside the project directory:
cd ~/laravel-app
2 - An internal proxy may be blocking the download of packages.
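If an internal proxy is the culprit, Composer honors the standard proxy environment variables, which can be passed into the container (the proxy URL below is a placeholder for your actual proxy):

docker run --rm --interactive --tty \
  -e HTTP_PROXY=http://proxy.example.com:3128 \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -v $(pwd):/app composer install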
I'm trying to install ibm.csdk.4.50.FC3.LNX in a Docker container based on Ubuntu 18.
I run the installation file in the container as follows:
root@mycontainer:/usr/src/ibm.csdk.4.50.FC3.LNX# ./installclientsdk -i console
But I get this error:
One or more prerequisite system libraries are not installed on your
computer. Install libdl.so.2, libcrypt.so.1, libpam.so.0,
libstdc++.so.6, libm.so.6, libgcc_s.so.1, libc.so.6, libncurses.so.5
and then restart the IBM Informix installation program.
The installation cannot succeed until the minimum requirements are
met. For more information about the prerequisites, see your
Installation Guide or check with your System Administrator.
However, those libraries are already present in the container at the following paths:
/lib/x86_64-linux-gnu/libdl.so.2
/lib/x86_64-linux-gnu/libcrypt.so.1
/lib/x86_64-linux-gnu/libpam.so.0
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
/lib/x86_64-linux-gnu/libm.so.6
/lib/x86_64-linux-gnu/libgcc_s.so.1
/lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libncurses.so.5
How can I install it?
Running apt install unixodbc-dev seems to fix it.
You might also want to install unixodbc.
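In the Dockerfile that would look something like this (a sketch, assuming the Ubuntu 18 base from the question):

FROM ubuntu:18.04
# installing the ODBC packages satisfied the installer's prerequisite check in this case
RUN apt-get update && \
    apt-get install -y unixodbc unixodbc-dev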
We have a similar issue where we run a shell script which runs dbaccess inside the Docker container, but as we run the container as the root user, it tries to use root to connect to the Informix DB server. Is there a way we can configure a user name and password for dbaccess so it uses the configured user ID instead of root?
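One hedged option: dbaccess by default connects as the operating-system identity of the calling process, so running the container as the intended user may be enough (the user name informix and the image/script names below are illustrative and must exist in your image):

# run the script as a dedicated OS user instead of root
docker run --user informix my-informix-image ./run-dbaccess.sh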
I am trying to set up a docker-compose architecture for local development and production, and I can't figure out at what point in a container's life it is best to install library dependencies. At the same time, I am not sure whether these should be placed in the container or in an external volume.
All my code is mounted in external volumes, so that changes are immediately picked up without rebuilding the containers, but I am not sure about libraries that need to be installed by pip (I am running a Python backend) and npm/yarn (for the webpack front-end).
Placing requirements.txt and package.json into the containers and running pip install and yarn install in the container build process means that I have to rebuild the container any time dependencies change - that is too much overhead.
Putting them in an external volume and running pip install and yarn install as part of the command of each container when it is started seems to solve the issue.
The build process of each container then contains only platform dependencies (e.g. installing Python, webpack or other platform tools), but libraries are installed after the container starts (with the CMD directive).
Is this the correct approach? I have seen a lot of examples doing exactly the opposite, running npm install in the build process of the container - but I don't see any advantage to that. Am I missing something?
Installing dependencies is usually part of the build process. Mounting code is a good trick when developing, in order to get changes directly reflected.
Concerning adding requirements.txt or package.json: installing dependencies takes time, and to keep that fast you need to take advantage of Docker layer caching. In particular, you want to avoid cache invalidation.
For pip I suggest the following during the development phase: for dependencies that you are unlikely to change, install them in a separate RUN instruction. Your Dockerfile will look something like:
FROM ..
# packages you are unlikely to change: install them in their own cached layer
RUN pip install package1 package2 package3 ...
# copy only the requirements file so that code changes do not invalidate this layer
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
Keep only the dependencies that might change in requirements.txt. Once you are done developing, add the other packages back to requirements.txt and build using the requirements file.
A similar approach would be adding two requirements files, and at the end combining them.
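That could look like this (the file names requirements-base.txt and requirements-dev.txt are illustrative):

# stable dependencies: this layer is rarely rebuilt
COPY requirements-base.txt .
RUN pip install -r requirements-base.txt
# volatile dependencies: only this layer is invalidated when they change
COPY requirements-dev.txt .
RUN pip install -r requirements-dev.txt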
I have a SUSE Linux 12 EC2 instance. I have activated an image, sles11sp3-docker-image, using sledocker. In the Dockerfile, when I try to install IBM Java 1.6 using
RUN zypper in java-1_6_0-ibm, I get the following error:
Refreshing service 'container-suseconnect'.
Problem retrieving the repository index file for service 'container-suseconnect':
[|]
Skipping service 'container-suseconnect' because of the above error.
Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
Loading repository data...
Reading installed packages...
'java-1_6_0-ibm' not found in package names. Trying capabilities.
Resolving package dependencies...
No provider of 'java-1_6_0-ibm' found.
Nothing to do.
The command '/bin/sh -c zypper in java-1_6_0-ibm' returned a non-zero code: 104
Please help
According to the docs (https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html), running zypper ref -s only gets you repo URLs with 12-hour tokens. Moreover, this command only appears to work while running in Docker on a SLES 12 host.
Once I push the image to a repo and run it on another host, zypper ref -s no longer works (same error as yours). I'm basically stuck pre-installing all the base stuff before I publish the image.
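So any zypper-based installs have to happen while building on a registered SLES host; a minimal sketch (the base image name is illustrative):

FROM suse/sles12
# works only while building on a registered SLES 12 host, where
# container-suseconnect can obtain the 12-hour repo tokens
RUN zypper ref -s && \
    zypper --non-interactive install java-1_6_0-ibm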