I am writing an application using Asterisk-Java. It is designed to run on a server that also runs Asterisk. So far, so good.
My application, which originates calls (using the AMI) and manages user input (using Asterisk-Java's FastAGI and an embedded AgiServer), works great on both my development server and the production server.
For deployment purposes, I am now asked to create a Docker container that would pack up Asterisk and my application, so that it could be easily deployed to other places without having to go through installations and configurations.
The thing is, my application does not behave the same way in the Docker container: on the development / production servers, using the getData function, I can get a DTMF code; in the Docker container, getData never seems to receive the DTMF data from Asterisk (I can stream a file, but the function eventually times out, which means it did not get anything).
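For context, the AGI script does something like the following (a minimal sketch, not my actual code; "enter-code" is an example prompt file):

    import org.asteriskjava.fastagi.AgiChannel;
    import org.asteriskjava.fastagi.AgiException;
    import org.asteriskjava.fastagi.AgiRequest;
    import org.asteriskjava.fastagi.BaseAgiScript;

    public class CodeEntryScript extends BaseAgiScript {
        public void service(AgiRequest request, AgiChannel channel) throws AgiException {
            answer();
            // Stream the prompt and wait up to 10 seconds for at most 4 DTMF digits.
            // In the container, this is the call that times out with no digits.
            String code = getData("enter-code", 10000L, 4);
            if (code == null || code.isEmpty()) {
                streamFile("vm-sorry"); // nothing received before the timeout
            } else {
                sayDigits(code);
            }
            hangup();
        }
    }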
I first thought of an unexposed port, but since this communication problem seems to be between the AGI server and Asterisk, which are both running in the container, I find that hard to believe.
I have no other ideas; please suggest something.
Check out the dtmfmode parameter for your SIP peer...
If you are using RFC 2833 (DTMF via RTP), unexposed media ports could very well be the reason.
You could try to adjust your RTP port settings (it could be a lot of ports!).
Or try DTMF via SIP INFO as an alternative.
But that wouldn't fix any underlying media problems...
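For reference, the settings in question look something like this (peer name and port range are just examples; chan_sip syntax shown, chan_pjsip has equivalent options):

    ; sip.conf -- DTMF mode per peer:
    [mypeer]
    type=friend
    dtmfmode=rfc2833    ; DTMF travels inside the RTP media stream
    ;dtmfmode=info      ; alternative: DTMF as SIP INFO messages (signalling, not media)

    ; rtp.conf -- the media port range that has to be reachable:
    [general]
    rtpstart=10000
    rtpend=10100        ; keep the range small if you must publish it, e.g.:
                        ; docker run -p 5060:5060/udp -p 10000-10100:10000-10100/udp ...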
I am using Docker Compose, which will run on a Linux tablet in production. I have a container serving up a web GUI. The user will click a "print" button in the GUI, which will result in some kind of request (probably HTTP to Flask in another container, which will maybe forward it to some other container), and that request will result in some data being sent to the printer.
My first step, I can only imagine, is to be able to send data to the printer from inside a Docker container. Any Docker container. I can then use that knowledge, of how to send something to the printer from Docker, to incorporate the printing into my system.
So, that's the infrastructure I'm working with. It can be simplified to: "I want to print to a printer from a Docker container." I'm working on a Mac, and I can print from the Mac using lp. So I know the connection to the printer is working.
I've tried a few containers, including olbat/cupsd. lpstat -r pretty much always says the Scheduler is running, but lpstat -v always shows that no destinations are set up.
My DevOps guy and I have been banging our heads against the wall all day on this. There are various articles and repos about setting up CUPS in Docker, but they all have holes somewhere, where they say "Use the fooglesplatter to connect to the printer" without telling you what a fooglesplatter is. Or (for a more concrete example) they'll talk about how you set up the CUPS dashboard to add your printer on your local machine, and then say "Voila! You can print!" without telling you what to do in the container. Or they'll refer to a conf file that doesn't exist on my machine. Or something else that leaves us completely baffled.
Can someone who has accomplished this please post (or direct me to) a step-by-step guide that basically treats me like I've never touched a computer before? That assumes no knowledge whatsoever and spells out every step? We are wise Docker users, and my DevOps guy is a much smarter guy than I am, but we are both at a loss.
I know this is a crazy request. Maybe it's not an SO appropriate question. Close it if you must. But we are incredibly stuck and I really hope someone can help us.
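Not a full guide, but here is the minimal sequence that has to happen somewhere, assuming an IPP-capable network printer reachable at 192.168.1.50 and the olbat/cupsd image (the printer address and queue name are examples, adjust to your setup):

    # Run the CUPS scheduler in a container:
    docker run -d --name cupsd -p 631:631 olbat/cupsd

    # Register the printer as a destination inside the container; this is the step
    # that makes "no destinations are set up" go away:
    docker exec cupsd lpadmin -p office -E -v ipp://192.168.1.50/ipp/print -m everywhere

    # lpstat -v should now list "office" as a destination:
    docker exec cupsd lpstat -v

    # And a test print from inside the container:
    echo "hello from Docker" | docker exec -i cupsd lp -d office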
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", but with the main advantage that the developer can use these container(s) for local development too, but with no access to the underlying code within the containers.
My first thought is that whoever controls the Docker instances controls everything, down to root access. So no, it isn't possible. But I am new to Docker and am not aware of all of its possibilities.
Is this idea in any way possible?
An obfuscation-only solution would not be enough, as "Encrypted and secure docker containers" details.
You would need full control of the host your containers run on in order to prevent any "poking". And that is not the case in your scenario, where the developer does have access to the host (i.e. his/her local development machine) on which said container would run.
What is sometimes done is to keep a piece of "core" code running at a remote location (a remote server, a USB device), so that this external code can, on the one hand, authenticate the client, but also, and more importantly, run some core business logic. That guarantees the externally located code has to be executed for the job to get done. If it were only a check rather than actual core code, a cracker could simply override it and avoid calling it on the client side. But if the code genuinely has to run and it doesn't, the software cannot finish its processing. Of course there is an overhead to all of this, both in complexity and probably in computation time, but that is one way to deploy something that will unfailingly be required to contact your server/external device.
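As a rough illustration of that idea (the endpoint, token, and payload format are all made up for the sketch, and a real deployment would need proper error handling):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RemoteCoreClient {
        // Sends the input to a remote "core" service that performs an essential
        // step the client cannot reproduce locally. Skipping this call means the
        // local workflow simply cannot produce a result.
        public static String processRemotely(String input) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://core.example.com/process")) // hypothetical endpoint
                    .header("Authorization", "Bearer <client-token>")    // client authentication
                    .POST(HttpRequest.BodyPublishers.ofString(input))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }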
Regards,
Eduardo
I am presently working on a client-server solution to transfer files to another machine via a socket network connection. Since I intend to do some evaluation on the receiving end as well, I am assuming that I will need some kind of client or server programme running there, too.
I am fairly new to the whole client-server thing and therefore have the following elementary question:
My present understanding is that client and server will be two independent programmes running on two different machines. How would one typically ensure that the communication partner (i.e., the server when sending from a client and the client when sending from a server) is actually up and running on the remote machine that I want to transfer a file to?
So far, I have been looking into the following options:
1. In the sending programme, include SSH access to the remote machine and start an instance of the receiving programme there.
2. Have the receiving programme run as a daemon process on the remote machine. This would mean that the receiving programme should always be running there. However, how would I know whether the process has crashed or has been shut down for some reason, and how would one recover from that without option 1 above?
So, my main question is: Are there any additional options that might be worth considering?
Thanks for your view on this!
Depending on how your client-server messages are set up, a ping message (I don't mean ICMP ping, but the basic idea), where the server can respond with "I am alive", would help. That way you at least know the server end is running.
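A minimal sketch of such a check, assuming a line-based protocol where the server answers PING with PONG (host, port, and message format are up to you):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PingCheck {
        public static boolean isServerAlive(String host, int port) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000); // 2 s connect timeout
                socket.setSoTimeout(2000);                               // 2 s read timeout
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                BufferedReader in =
                        new BufferedReader(new InputStreamReader(socket.getInputStream()));
                out.println("PING");
                return "PONG".equals(in.readLine());
            } catch (Exception e) {
                return false; // unreachable or not answering: treat as down
            }
        }
    }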
It is not uncommon in production environments to put monitoring systems in place for exactly this. Another option worth considering is xinetd scripts: services that get started on incoming connections.
There are probably newer ways to achieve automatic start/restart, or start-on-connection, with systemd/systemctl, but I am not familiar enough with them to give you the specifics.
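For what it's worth, the systemd equivalent of the xinetd idea is socket activation; a sketch with made-up unit names and port:

    # /etc/systemd/system/receiver.socket
    [Socket]
    ListenStream=9000

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/receiver.service -- started on the first incoming connection
    [Service]
    ExecStart=/usr/local/bin/receiver
    Restart=on-failure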
A somewhat crude but effective means may be a cron job that periodically runs a script to keep the service up.
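For example (the path, process name, and --daemon flag are placeholders for your own):

    # run every 5 minutes; restart the receiver if no process by that name exists
    */5 * * * * pgrep -x receiver > /dev/null || /usr/local/bin/receiver --daemon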
So I have been looking everywhere, and so far I haven't been able to find anything that allows me to SSH from an iPhone app, and have finally resorted to posting a new post.
So I am trying to make an app to manage servers, and one of the tasks I need to accomplish is to somehow remotely connect over the internet to a server, using either an IP address or a DNS name.
The connection to the server does not necessarily need to be an SSH connection; it could be telnet, although because of the security issues I would prefer SSH (if it is a lot less code, I would accept telnet), but on the other hand it could be some other type of connection.
The application just needs to be able to run a script on the server end. If it is SSH or telnet I will not need any help, but if it is some other type of connection I may need a bit of help. Also, the server on the other end is intended to be a Linux server (either Ubuntu or Gentoo; I am not sure which yet, but it will almost certainly be a Linux server operating system).
I have already looked at libssh/libssh2 and would welcome any similar demos, as I have not been able to work out how to get the frameworks to work, and there are also licensing issues with using the frameworks in my app.
PS. I am relatively new to programming, and although I have some basic knowledge of coding, some kind of tutorial or sample code would be greatly appreciated.
Many Thanks For Any Help
Thomas
SSH is a hugely complicated beast. As long as you only need to execute one command without interactivity, it sounds like you could achieve the same thing by running a web server on the server and posting the commands via HTTP from the device. You can use SSL to secure the connection. You'll need a mechanism that allows you to authenticate the device (you'd need that with SSH, too). And you'll have to have something in the web server that figures out and runs the desired script. But all of that is still hugely easier than dealing with libssh.
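To make the idea concrete, here is a sketch of the server side using the JDK's built-in HTTP server (the paths, command names, and whitelist are invented; add TLS and real authentication before using anything like this):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;

    public class ScriptGateway {
        // Whitelist of allowed scripts -- never run raw input from the device.
        private static final Map<String, String> SCRIPTS =
                Map.of("restart-web", "/opt/scripts/restart-web.sh");

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/run", exchange -> {
                // e.g. POST /run?restart-web from the iPhone app
                String name = exchange.getRequestURI().getQuery();
                String script = SCRIPTS.get(name);
                int status = 404;
                String body = "unknown command";
                if (script != null) {
                    try {
                        new ProcessBuilder(script).start().waitFor();
                        status = 200;
                        body = "ok";
                    } catch (Exception e) {
                        status = 500;
                        body = "failed";
                    }
                }
                byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(status, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(bytes);
                }
            });
            server.start();
        }
    }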
I have created a DataSnap service, using Bob Swart's white paper as a guide. I have been debugging, and I deployed successfully using the VCL Forms application as a server. But when I try to deploy the service version, it installs OK; I then try to start the service and it immediately stops. The error in the event log suggests that the configured port is already in use, but I have tried different port numbers for both the TCPServerTransport and the HTTPService without any joy. The DSServer is not set to AutoStart, as I want to set the port number from a configuration file. The error message displayed in the event log is:
Service failed on start: Could not bind socket. Address and port are already in use..
I have also tried writing to a log file on start-up and on execute, but it looks as if it is not getting that far.
A solution is needed ASAP, before I have to revert to a thick client, which I do not really want to do.
Thanks
Firstly, get a copy of TCPView from the Sysinternals suite (now run by Microsoft) and use it to monitor which app is using the port you want.
I would hazard a guess that if the app works fine stand-alone (as you say it does) and you are trying to use the same port in the service, then perhaps the service app is opening the port at startup without you realizing it, so that when you try to open the port manually the app finds it already in use. Or somehow the app is trying to open the port twice: the first time succeeds but, maybe due to an event or an unexpected code path, the app tries to open it a second time and fails. TCPView will help spot this.
If you are sure that the port you have configured is actually free and not in use by any other software on the machine, then there might be anti-virus / security software running that blocks all software from listening on specific ports, or on any port except a few configured ones. The message you are getting could be one of the symptoms of how such software handles attempts by apps to start listening on a port.
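If you prefer the command line over TCPView, the same information is available via netstat (replace <port> and <pid> with your own values):

    netstat -ano | findstr :<port>
    tasklist /FI "PID eq <pid>"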