Sending and receiving events successfully with socket.io, but nothing is happening - docker

I'm trying to get my webapp to send messages and I can't figure out why it isn't working. There are no errors that I can see; it's just that the actions in my events.py handler aren't happening. I am running a gunicorn server with eventlet workers serving a Flask app.
Here's the command that starts the gunicorn server through docker:
CMD [ "gunicorn", "--reload", "-b", "0.0.0.0:5000", "--worker-class", "eventlet", "-w", "1", "app:app"]
Here's the relevant code in notes.html:
// Imports socketio
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.0.1/socket.io.js" integrity="sha512-q/dWJ3kcmjBLU4Qc47E4A9kTB4m3wuTY7vkFJDTZKjTs8jhyGQnaUrxa0Ytd0ssMZhbNua9hE+E7Qv1j+DyZwA==" crossorigin="anonymous"></script>
<script type="text/javascript" charset="utf-8">
// sets domain to talk to. (empty sets it to localhost)
const socket = io()
// send message to server on trigger from form.
socket.emit('send_new_session', new_session_form_id.value, new_session_form_number.value, new_session_form_title.value, new_session_form_synopsis.value)
console.log('send_new_session')
// console logs the message here, so I know it's getting this far. The problem seems to be that the server isn't acting on the message for some reason.
events.py:
from . import db, socketio
from .classes import *
from flask_socketio import emit

@socketio.on('send_new_session')
def send_new_session(id, number, title, synopsis=None):
    print("arrived!!!!!!!!!!!")
    # more code that adds the new session to the database
    ..
I have logging set up correctly to stdout, but I never see the "arrived" message, so the handler code never seems to run.
Here are the server logs from when I send the message:
rest-server | Bpt-ydbpGYLF-HGKAAAC: Sending packet OPEN data {'sid': 'Bpt-ydbpGYLF-HGKAAAC', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000}
rest-server | Bpt-ydbpGYLF-HGKAAAC: Received packet MESSAGE data 0
rest-server | Bpt-ydbpGYLF-HGKAAAC: Sending packet MESSAGE data 0{"sid":"MRAxFiGYyLB3C6MBAAAD"}
rest-server | Bpt-ydbpGYLF-HGKAAAC: Received request to upgrade to websocket
rest-server | Bpt-ydbpGYLF-HGKAAAC: Upgrade to websocket successful
rest-server | Bpt-ydbpGYLF-HGKAAAC: Received packet MESSAGE data 2["send_new_session","1","2","foo","bar"]
rest-server | received event "send_new_session" from MRAxFiGYyLB3C6MBAAAD [/]
rest-server | Bpt-ydbpGYLF-HGKAAAC: Sending packet PING data None
rest-server | Bpt-ydbpGYLF-HGKAAAC: Received packet PONG data
You can see in the log that the message is in fact being sent and received, but for some reason the actions in the event handler aren't happening. I've been trying everything I can think of for a couple of days now. Any help would be greatly appreciated!
Everything below here is probably not relevant, but if it helps, this is how I set up the app:
file set up:
/app
--app.py
--requirements.txt
--Dockerfile
--docker-compose.yml
--.flaskenv
--/project
----/static
----/templates
----__init__.py
----settings.py
----events.py
----BONapp.py
----auth.py
etc...
settings.py
import os
from flask import Flask
app = Flask(__name__)
db_password = os.environ.get('DB_PASS')
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:' + db_password + '@bonmysqldb:3306/BON'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = db_password
__init__.py
from flask_login import LoginManager
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from flask_socketio import SocketIO
from .settings import app
db = SQLAlchemy(app)
socketio = SocketIO(app, logger=True, engineio_logger=True)
def create_app():
    migrate = Migrate(app, db)
    from .classes import Users
    db.init_app(app)
    socketio.init_app(app)
    login_manager = LoginManager()
    login_manager.login_view = 'auth.login'
    login_manager.init_app(app)

    # provide login_manager with a unicode user ID
    @login_manager.user_loader
    def load_user(user_id):
        return Users.query.get(int(user_id))

    # blueprint for auth routes of app
    from .auth import auth as auth_blueprint
    app.register_blueprint(auth_blueprint)

    # blueprint for non-auth parts of app
    from .BONapp import main as main_blueprint
    app.register_blueprint(main_blueprint)

    return app
app.py
from project.__init__ import create_app
app = create_app()

I figured it out while reading over my question again...
It was in events.py. I changed:
from . import db, socketio
to:
from .__init__ import db, socketio
I'm not exactly sure why that mattered, but it fixed it.
facepalm
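For anyone hitting the same wall: the pattern that seems to matter is that a handler decorated with @socketio.on is only registered once the module that defines it is actually imported somewhere. A minimal sketch of making that explicit (module and function names assumed from the layout above, not the exact fix applied here):
# project/__init__.py -- hypothetical sketch, not the code from the question
def create_app():
    # ... existing db / socketio / blueprint setup ...
    from . import events  # noqa: F401  # importing events.py runs the @socketio.on decorators, registering the handlers
    return app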

Related

Django Channels / Daphne Internal Server Error: 'module' object is not callable

I'm trying to set up a production server for a Django Channels application that I've been running locally with success. However, when starting Daphne and reaching the application via a browser, I get an Internal Server Error from Daphne. Console output is as follows:
2021-08-07 11:57:09,584 DEBUG HTTP b'GET' request for ['127.0.0.1', 33566]
2021-08-07 11:57:10,071 ERROR Exception inside application: 'module' object is not callable
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/asgiref/compatibility.py", line 34, in new_application
    instance = application(scope)
TypeError: 'module' object is not callable
2021-08-07 11:57:10,072 DEBUG HTTP 500 response started for ['127.0.0.1', 33566]
Since project/asgi.py might have some relevance here, it is included below:
import os
import django
from django.core.asgi import get_asgi_application
os.environ['DJANGO_SETTINGS_MODULE'] = "proj.settings"
django.setup()
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import app.routing
application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter(
            app.routing.websocket_urlpatterns
        )
    ),
})
However, I've been poking around with said asgi.py, and I have a feeling that the application variable at least has nothing to do with this, as that block can be commented out with no impact on the error message.
Relevant packages:
channels 3.0.4
daphne 3.0.2
Django 3.2.6
Any ideas?
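For what it's worth, Daphne expects a module:attribute reference to that application object; a typical invocation against the asgi.py above (module path assumed from the DJANGO_SETTINGS_MODULE line) looks like:
# point Daphne at the "application" attribute, not just the module
daphne -b 0.0.0.0 -p 8000 proj.asgi:application
Pointing it at a bare module instead (e.g. daphne proj.asgi) is one common way to end up with asgiref trying to call a module object, which matches the traceback above.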

Python KafkaConsumer not connecting

Setup:
I have 3 docker containers
1) For Kafka
2) For Zookeeper
3) For JupyterLab
I set up networking between these containers, and I can see that the Kafka producer is able to run and produce data.
KafkaProducer.ipynb
KAFKA_BROKER = ['172.20.0.2:9093']
from kafka import KafkaProducer
from kafka.errors import KafkaError
producer = KafkaProducer(bootstrap_servers=KAFKA_BROKER)
for _ in range(100):
    print("sending")
    producer.send('my-topic', key=b'foo', value=b'bar')
    print("success")
Here send() sends the message 100 times.
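One thing worth knowing about the snippet above (an aside, not something the question relies on): kafka-python's send() is asynchronous and only queues the record, so a sketch that forces delivery before the notebook cell ends would add a flush:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['172.20.0.2:9093'])  # broker address taken from the question
producer.send('my-topic', key=b'foo', value=b'bar')
producer.flush()  # send() only queues the record; flush() blocks until it has actually been delivered to the broker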
KafkaConsumer.ipynb
KAFKA_BROKER = ['172.20.0.2:9093']
from kafka import KafkaConsumer
consumer = KafkaConsumer('my-topic',group_id='my-group',bootstrap_servers=KAFKA_BROKER)
print("Comm success")
for message in consumer:
    # message value and key are raw bytes -- decode if necessary!
    # e.g., for unicode: `message.value.decode('utf-8')`
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                         message.offset, message.key,
                                         message.value))
In the consumer code above, the line print("Comm success") never gets executed. Based on the producer run, the network is open and Jupyter is able to talk to the Kafka broker, but the consumer is not able to connect to the same broker to consume data. How can I start debugging this?
By default auto_offset_reset is latest, so set it to earliest and use a new group_id:
consumer = KafkaConsumer('my-topic',group_id='new-group',auto_offset_reset = 'earliest',bootstrap_servers=KAFKA_BROKER)
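If that alone doesn't explain it, a minimal way to see where the consumer stalls, assuming the same kafka-python client as in the snippets above, is to turn on its own logging before constructing the consumer:
import logging

# kafka-python logs its connection and metadata activity under the 'kafka' logger namespace
logging.basicConfig(level=logging.INFO)
logging.getLogger('kafka').setLevel(logging.DEBUG)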

How to retrieve worker logs for a Dask-YARN job?

I have a simple Dask-YARN script that does only one task: load a file from HDFS, as shown below. However, I'm running into a bug in the code, so I added a print statement to the function, but I don't see that statement in the worker logs that I obtain with yarn logs -applicationId {application_id}. I even tried Client.get_worker_logs(), but that doesn't show the stdout either, just some INFO about the worker(s). How does one obtain worker logs after the execution of the code has completed?
import sys
import numpy as np
import scipy.signal
import json
import dask
from dask.distributed import Client
from dask_yarn import YarnCluster
@dask.delayed
def load(input_file):
    print("In call of Load...")
    with open(input_file, "r") as fo:
        data = json.load(fo)
    return data

# Process input args
(_, filename) = sys.argv

dag_1 = {
    'load-1': (load, filename)
}
print("Building tasks...")
tasks = dask.get(dag_1, 'load-1')
print("Creating YARN cluster now...")
cluster = YarnCluster()
print("Scaling YARN cluster now...")
cluster.scale(1)
print("Creating Client now...")
client = Client(cluster)
print("Getting logs..1")
print(client.get_worker_logs())
print("Doing Dask computations now...")
dask.compute(tasks)
print("Getting logs..2")
print(client.get_worker_logs())
print("Shutting down cluster now...")
cluster.shutdown()
I'm not sure what's going on here; print statements should (and usually do) end up in the log files stored by YARN.
If you want your debug statements to appear in the worker logs from get_worker_logs, you can use the worker logger directly:
from distributed.worker import logger
logger.info("This will show up in the worker logs")

Can't connect client to server in Python 3.6.4

Server Code:
import http.server
import socketserver
PORT = 8000
Handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()
Client Code:
import http.client
conn = http.client.HTTPSConnection("localhost", 8000)
conn.request("HEAD","/index.html")
I get ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:777) on the client side, and "code 400, message Bad request version" on the server side. I have no idea what's wrong with it.
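For comparison, the server above speaks plain HTTP (SimpleHTTPRequestHandler does no TLS), so a client sketch that matches it would use HTTPConnection rather than HTTPSConnection:
import http.client

# plain HTTP on port 8000, matching the socketserver.TCPServer above
conn = http.client.HTTPConnection("localhost", 8000)
conn.request("HEAD", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.reason)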

Kubernetes/Spring Cloud Dataflow stream > spring.cloud.stream.bindings.output.destination is ignored by producer

I'm trying to run a "Hello, world" Spring Cloud Data Flow stream based on the very simple example explained at http://cloud.spring.io/spring-cloud-dataflow/. I'm able to create a simple source and sink and run it on my local SCDF server using Kafka, so up to this point everything is correct and messages are produced and consumed in the topic specified by SCDF.
Now, I'm trying to deploy it in my private cloud based on the instructions listed at http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_getting_started. Using this deployment I'm able to deploy a simple "time | log" out-of-the-box stream with no problems, but my example fails because the producer is not writing to the topic specified when the pod is created (for instance, spring.cloud.stream.bindings.output.destination=ntest33.nites-source9) but to the topic "output". I have a similar problem with the sink component, which creates and expects messages in the topic "input".
I created the stream definition using the dashboard:
nsource1 | log
And container args for the source are:
--spring.cloud.stream.bindings.output.producer.requiredGroups=ntest34
--spring.cloud.stream.bindings.output.destination=ntest34.nsource1
The code snippet for the source component is:
package xxxx;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.support.GenericMessage;
@SpringBootApplication
@EnableBinding(Source.class)
public class HelloNitesApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(HelloNitesApplication.class, args);
    }

    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT)
    public MessageSource<String> timerMessageSource()
    {
        return () -> new GenericMessage<>("Hello " + new SimpleDateFormat().format(new Date()));
    }
}
And in the logs I can see clearly
2017-04-07T09:44:34.596842965Z 2017-04-07 09:44:34,593 INFO main o.s.i.c.DirectChannel:81 - Channel 'application.output' has 1 subscriber(s).
The question is: how do I properly override the topic where messages must be produced/consumed, or which attribute and values do I have to use to make this work on k8s?
UPDATE: I have a similar problem using RabbitMQ
2017-04-07T12:56:40.435405177Z 2017-04-07 12:56:40.435 INFO 7 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.output' has 1 subscriber(s).
The problem was with my Docker image. I still don't know the details, but using the Dockerfile indicated at https://spring.io/guides/gs/spring-boot-docker/ started 2 processes in the Docker container: one with the parameters and one without, and the latter was the one that stayed up and was therefore being used.
The solution was to replace
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
With
ENTRYPOINT [ "java", "-jar", "/app.jar" ]
And it started working. There must be a good reason why the example indicated the first entrypoint and why 2 processes were created, but the reason is still beyond my understanding.
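A plausible explanation for both symptoms (an assumption on my part, not something the answer above states): in the shell-wrapper form, any extra container args are handed to sh itself rather than appended to the java command line, so the --spring.cloud.stream.* arguments never reach the application, and you also get the sh process alongside java; the exec form passes the args straight to java.
# Shell wrapper: container args (e.g. --spring.cloud.stream.bindings.output.destination=...)
# become positional parameters of sh and never reach the java command string
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

# Exec form: container args are appended directly to this command line and are picked up by Spring Boot
ENTRYPOINT [ "java", "-jar", "/app.jar" ]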
Can you provide more details on how you set that configuration property? That feature is pretty basic, so this should work. If you are using a stream definition to set it, please update your question with the stream definition.
The channel name remains 'output' because that's what the application uses internally.
