Mac: Homebrew not installing all the Kafka scripts

I have installed kafka-0.8.2.2 using Homebrew, as described in the link below:
https://dtflaneur.wordpress.com/2015/10/05/installing-kafka-on-mac-osx/
I have configured only one broker. I am able to run ZooKeeper, Kafka, a consumer, and a producer, publish messages from the producer console, and see the messages in the consumer console.
But when I try to run the script below, I find that it is missing:
kafka-consumer-groups.sh
Basically, I want to see:
the consumer group list
the number of partitions available (I would like to know the number of partitions present in the broker)
Also, when I run echo $KAFKA_HOME it prints nothing, i.e. the variable is empty.
But when I run brew install kafka I get:
Warning: kafka-0.8.2.2 already installed
I am concerned about whether Kafka is installed properly. Please suggest.
I am able to see the following scripts under /usr/local/bin/:
kafka-console-consumer.sh
kafka-console-producer.sh
kafka-consumer-offset-checker.sh
kafka-consumer-perf-test.sh
kafka-mirror-maker.sh
kafka-preferred-replica-election.sh
kafka-producer-perf-test.sh
kafka-reassign-partitions.sh
kafka-replay-log-producer.sh
kafka-replica-verification.sh
kafka-run-class.sh
kafka-server-start.sh
kafka-server-stop.sh
kafka-simple-consumer-shell.sh
kafka-topics.sh
I am not sure whether any scripts other than the ones I listed should be present.

kafka-consumer-groups.sh is only available from 0.9.
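Once you are on 0.9 or later, listing the (ZooKeeper-based) consumer groups would look something like this, assuming ZooKeeper runs on localhost:2181:
kafka-consumer-groups.sh --zookeeper localhost:2181 --list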
Each topic (not broker) consists of partitions, so to check out the partitions of a topic you can use:
kafka-topics.sh --zookeeper localhost:2181 --describe --topic your_topic_name
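With a single broker and default settings, the output of the describe command looks roughly like this (topic name and counts will of course differ in your setup):
Topic: your_topic_name  PartitionCount: 1  ReplicationFactor: 1  Configs:
  Topic: your_topic_name  Partition: 0  Leader: 0  Replicas: 0  Isr: 0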
I don't know about the brew installation of Kafka, since I'm using Docker, which gives you a single-command installation without touching your local file system. I wholeheartedly suggest you try that (I can provide more details if you choose to go the Docker way).

After installing with brew, you'll see a printout akin to this:
To restart kafka after an upgrade:
brew services restart kafka
Or, if you don't want/need a background service you can just run:
/opt/homebrew/opt/kafka/bin/kafka-server-start /opt/homebrew/etc/kafka/server.properties
You can then run /opt/homebrew/opt/kafka/bin/kafka-consumer-groups.
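For example, to list consumer groups against a broker on localhost:9092 (adjust the address to your setup):
/opt/homebrew/opt/kafka/bin/kafka-consumer-groups --bootstrap-server localhost:9092 --list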

Connecting to MariaDB from docker-compose Airflow - No module named 'MySQLdb' error

I've modified the docker-compose for Airflow (apache/airflow:2.5.0-python3.10) and added a MariaDB service: block to emulate the target DB for a project in dev.
In _PIP_ADDITIONAL_REQUIREMENTS I've included pymysql, and I am attempting to create a MySQL connection under Admin | Connections, using the MySQL connector to bridge over to Maria (before you ask: I used host.docker.internal as the host name in the Connections | Add Connection field).
apache-airflow-providers-mysql is already installed as part of the official docker-compose for Airflow (and I checked it manually on the container).
However, every time I hit the Test button I get No module named 'MySQLdb', which I had assumed the pip extras install took care of.
Also, I am assuming I can simply speak the MySQL protocol over the wire to MariaDB from Python, but if additional libraries or installs are needed (for example libmariadb3, libmariadb-dev, python3-mysqldb or similar), please let me know. I assume someone else has had to do this already, though perhaps not from Docker. I am trying to avoid building my own image and would rather use the existing docker-compose, but if it's unavoidable and I need to do a docker build ., let me know.
I am concerned this may be because I am on an M1 Mac laptop: I've now dug down and found that two requirements have a platform_machine != "aarch64" marker against them. Surely there is a way around this, though, especially with Docker? (Do I need a custom build?) I also don't understand why these particular components would not work on arm64 at this point, given that Macs have shipped with these chips for over two years.
thanks!
PS: MariaDB is not supported as a backend in Airflow, but connections to it as a database should work much like anything else that can connect via SQLAlchemy etc.
I set up the connection in Airflow and then installed the extra pip package for pymysql. However, I still seem to get the same No module named 'MySQLdb' error on Test Connection.
You need to install the MySql provider for Airflow:
pip install apache-airflow-providers-mysql
or add it to your requirements file; it should install everything you need to use MySQL/MariaDB as a connection.
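With the official docker-compose file, a minimal sketch of that is to extend the _PIP_ADDITIONAL_REQUIREMENTS environment variable (pymysql is included here only because the question uses it):
_PIP_ADDITIONAL_REQUIREMENTS: "apache-airflow-providers-mysql pymysql"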
The issue appears to be that Airflow's MySQL dependencies carry a platform_machine != "aarch64" marker, and my machine is an M1-chipped Apple laptop.
How did I get around this? In my docker-compose file I set a platform directive on the Airflow service(s) to force an x86_64 image and then rebuilt the containers. After that, I had no problem connecting with the MySQL connector to the MariaDB that we had as a target.
Note: this greatly increased the startup time for the docker-compose instance (from roughly 20s to 90s), but it works. As a solution for dev, it worked great.
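A minimal sketch of that override, assuming the service names from the official compose file (yours may differ); linux/amd64 is the platform string Docker expects for x86_64:
services:
  airflow-webserver:
    platform: linux/amd64
  airflow-scheduler:
    platform: linux/amd64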

Any reason not to remove Docker as default WSL distro?

I have the latest version of Windows Subsystem for Linux (WSL) installed, but when I run wsl from the command line I get the following error:
An error occurred mounting one of your file systems. Please run ‘dmesg’ for more details.
I hunted down a few possible explanations, one being that Docker is set as my default WSL distribution. Sure enough, when I run wsl -l -v, the response is:
NAME | STATE | VERSION
* docker-desktop-data | Stopped | 2
docker-desktop | Stopped | 2
To correct this, I am told to change the default from Docker to something else with the command wsl -s distroName where “distroName” should be changed to … something.
So I have two questions:
What should I type instead of “distroName”?
Will Docker Desktop still perform as intended if I do this?
If you're not using WSL for anything other than Docker Desktop, then it really doesn't matter.
But since you attempted to run wsl, it sounds like you may want to try it out. In that case, you should definitely install a WSL distribution that is meant for interactive use.
The docker-desktop and docker-desktop-data distributions aren't really intended to be accessed directly by the user, but one of them will get set as default if you don't have any other distribution installed. There's a proposal to have a way for Docker to set these as "hidden" so that WSL wouldn't automatically set either as default.
What should I type instead of “distroName”?
First, install a user distribution. Which one you choose is up to you, but I typically recommend Ubuntu for first-time installs. When installing WSL on recent Windows releases, it should be the default.
It's installable via the Microsoft Store -- Just find the one that says "Ubuntu" (with no version number). It will install the latest release.
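Alternatively, on recent Windows builds you can install it straight from PowerShell (assuming your build ships the wsl --install machinery):
wsl --install -d Ubuntu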
Then, in PowerShell:
wsl --set-default Ubuntu
Will Docker Desktop still perform as intended if I do this?
Absolutely -- It's really an error/corner-case/oversight that docker-desktop-data ever gets set as default anyway, which is why it's nice that the WSL team is considering a method to prevent this.

First run of Docker -- Running makeitopen.com's F8 App

I'm reading through makeitopen.com and want to run the F8 app.
The instructions say to install the following dependencies:
Yarn
Watchman
Docker
Docker Compose
I've run brew install on all of these, and none appeared to indicate that any of them had already been installed. I have not done any config or setup or anything on any of these new packages.
The next step is to run yarn server and here's what I got from that:
$ docker-compose up
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
error Command failed with exit code 1.
Not having any experience with any of these packages, I don't really know what to do (googling brings up so many different scenarios). What do I do now?
PS. Usually when I work with React Native I run npm start to start the expo-ready app, but the F8 project doesn't respond to npm start.
UPDATE (sort of):
I ran docker-compose up which appeared to run all the docker tasks, and I'm assuming the server is running (although I haven't tried yarn server again).
I continued with the instructions, installing dependencies with yarn (which did appear to throw some errors, quite a few actually, but also reported a lot of successes).
I then ran yarn ios, and after I put the Facebook SDK in the right folder on my computer, the Xcode project opened.
The Xcode build failed. Surprise, right? It did make it through a lot of the tasks, but it can't find FBSDKShareKit/FBSDKShareKit.h (although that file does appear to exist in FBSDKShareKit/Headers/).
Any thoughts? Is there any way in the world I can just run this in expo?
If docker and docker-compose are installed properly, you either need root privileges or need to add yourself to the docker group:
usermod -aG docker your-username
Keep in mind that all members of the docker group de facto have root access on the host system. It's recommended to only add trusted users and to take precautionary measures to avoid abuse, but that is another topic.
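Note that the group change only takes effect for new login sessions; log out and back in, or start a shell with the group already applied:
newgrp docker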
If docker is not working properly, check whether its daemon is running and maybe restart the service:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2019-02-28 19:41:47 CET; 3 weeks 3 days ago
Then create the containers again using docker-compose up.
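Since the question mentions installing via brew on a Mac: systemctl does not exist there. Assuming you are using Docker Desktop (brew install docker alone only provides the client), starting the daemon means launching the app, e.g.:
open -a Docker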
Why a simple npm start doesn't work
The package.json file shows that those scripts exist; yarn server simply runs docker-compose up. Looking at the docker-compose.yml file, we see that it creates five containers for its Mongo database as well as GraphQL and the frontend/backend. Without Docker, it wouldn't be possible to set up that many services so quickly; you'd need to install and configure them manually.
In the long run, your system can get cluttered with software when you play around with different stacks or develop for multiple open source projects. Docker is a great way to deploy modern applications while keeping them flexible and isolated. It's worth getting started with this technology.

Can I Install Docker Over cPanel?

Can I install Docker on a server with cPanel and CentOS 7 pre-installed? Since I am not familiar with Docker, I am not completely sure whether it will interfere with cPanel. I already have a server with CentOS 7 and cPanel configured, and I want to know whether I can install Docker on top of this configuration without messing anything up.
Yes, you can install Docker on a cPanel/WHM server just like on any other CentOS server or virtual machine.
Just follow these simple steps (as root):
1) yum install -y yum-utils device-mapper-persistent-data lvm2 (these should already be installed...)
2) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) yum install docker-ce
4) enable docker at boot (systemctl enable docker)
5) start docker service (systemctl start docker)
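To verify that the daemon is up, you can then run a throwaway container:
docker run --rm hello-world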
The guide above is for CentOS 7.x. Don't expect to find any references or options related to Docker in the WHM interface; you will be able to control Docker from the command line over an SSH shell.
I have some Docker containers already running on my cPanel/WHM server and I have no issues with them. I basically use them for caching, proxying and other similar stuff.
As long as you follow these instructions, you won't mess up any of your cPanel/WHM services/settings or current cPanel accounts/settings/sites/emails, etc.
Not sure why you haven't tried this already!
I've been doing research and working on getting Docker running on cPanel. It's not just about getting it to work on a CentOS 7 box, but about making it palatable for the cPanel crowd in the form of a plugin. So far I can confirm that it's absolutely doable. Here's what I've accomplished and how:
Integrate Docker Compose with cPanel (which is somewhat a step further than WHM)
Leverage the user-namespace kernel feature in Linux so Docker services can't escalate their privileges (see userns-remap and the sketch after this list)
Leverage Docker Compose so users can build complex services and start ready-made apps from the store with a click
Make sure services started via Docker run on a non-public IP on the server; everything gets routed via ProxyPass
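As a rough illustration of the userns-remap feature mentioned above (this is stock Docker daemon configuration, not the plugin's own setup), you would enable it in /etc/docker/daemon.json and restart the daemon:
{
  "userns-remap": "default"
}
With the "default" value, Docker creates a dockremap user and remaps container root to that unprivileged UID range.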
cPanel has been gracious to provide a Slack channel for people to discuss this upcoming plugin. I'd be more than happy to invite you if you'd like to be kept updated or to contribute. Let me know!
FYI, there's more info at https://www.unixy.net/docker if you're interested. Please note that this plugin is in private beta, but I'm more than happy to let people use it!
Yes, you can; in fact, someone else has already done it: https://github.com/mirhosting/cPanel-docker

Install Redis as a Windows service

I've just installed Redis on Windows with the MSOpenTech port. Everything is fine except the Windows service. To set it up, I need to pass Redis the service command line arguments, which I don't know how to do.
How can I solve this problem?
These are the instructions:
Running Redis as a Service
In order to better integrate with the Windows Services model, new
command line arguments have been introduced to Redis. These service
arguments require an elevated user context in order to connect to the
service control manager. If these commands are invoked from a
non-elevated context, Redis will attempt to create an elevated context
in which to execute these commands. This will cause a User Account
Control dialog to be displayed by Windows and may require
Administrative user credentials in order to proceed.
Installing the Service
--service-install
This must be the first argument on the redis-server command line.
Arguments after this are passed in the order they occur to Redis when
the service is launched. The service will be configured as Autostart
and will be launched as "NT AUTHORITY\NetworkService". Upon successful
installation a success message will be displayed and Redis will exit.
This command does not start the service.
For instance:
redis-server --service-install redis.windows.conf --loglevel verbose
Uninstalling the Service
--service-uninstall
In the directory where you installed Redis, instead of
redis-server --service-install redis.windows.conf--loglevel verbose
run
redis-server --service-install redis.windows.conf --loglevel verbose
(i.e., add a space before "--loglevel")
Similar to starting Redis from the command line, before installing the service you will need to specify the maxheap parameter. Open the redis.windows.conf file, find the commented-out maxheap line, and specify a suitable size in bytes.
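For example, in redis.windows.conf (the size below is just an illustrative value, not a recommendation):
maxheap 104857600
would give Redis a 100 MB heap.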
Then run
redis-server --service-install redis.windows.conf --loglevel verbose
You will need to start the service manually after you install it, or just restart Windows.
The simplest way is to run Command Prompt as an administrator, then change to the Redis directory and run:
redis-server --service-install redis.windows.conf --loglevel verbose
The service will be installed successfully.
For me, as mentioned in "Redis doesn't start as windows service on Windows7", installing the service with the --service-name parameter made it run magically without any issue.
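Something along these lines (the service name is just an example):
redis-server --service-install redis.windows.conf --service-name redis1 --loglevel verbose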
The Microsoft Redis Open Tech project has been abandoned and is no longer supported.
Memurai is a Windows port of Redis that derives from that Open Tech project (see here). It is actively maintained and supported.
Take a look.
