I am trying to run the containernet_example.py file (where I replaced the two Docker hosts with my own Docker images) with ONOS as the controller for my topology.
When I open the ONOS UI at localhost:8181/onos/ui/login.html, I cannot see the hosts, i.e. the Docker containers, there; the topology is not displayed in the ONOS page, even though the hosts can ping each other in the Containernet CLI. The command I use is:
sudo mn --controller remote,ip=MYIPADDRESS --switch=ovsk,protocols=OpenFlow13 --custom containernet_example.py
If I try standard topologies such as tree, however, they do show up in ONOS. I want to use my Docker images as hosts both in the ONOS GUI and in the Containernet CLI.
I have read many posts but could not solve this issue. Any insight would be helpful. Thanks in advance.
The code used here was taken from another Stack Overflow post, which I cannot link because I did not bookmark the exact page.
Below is the code that worked for me with two Docker images as Containernet hosts.
from mininet.net import Containernet
from mininet.node import Controller, OVSKernelSwitch, RemoteController
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel
setLogLevel('info')
net = Containernet(controller=RemoteController, switch=OVSKernelSwitch)  # use a remote controller
info('*** Adding controller\n')
net.addController('c0', controller=RemoteController, ip='MYIPADDRESS', port=6653)
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="myimage1:myimagetag")
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="myimage2:myimagetag")
info('*** Adding switches\n')
s1 = net.addSwitch('s1', protocols="OpenFlow13")  # specify the OpenFlow protocol version
info('*** Creating links\n')
net.addLink(d1, s1)
net.addLink(d2, s1)
info('*** Starting network\n')
net.start()
info('*** Testing connectivity\n')
net.ping([d1, d2])
info('*** Running CLI\n')
CLI(net)
info('*** Stopping network\n')
net.stop()
And the command I used is simply: sudo python3 myfilename.py
Working on Windows 10, I downloaded the simple Quarkus sample prototype.
I can run it normally and access http://localhost:8118/hello, except when I try to run it from the native image executable built for GraalVM.
I should mention that I don't have GraalVM installed; I'm trying to do it from a Docker container (called contenedor-graalvm-1) based on the following GraalVM image:
container-registry.oracle.com/graalvm/community:ol8-java17-22.3.0-b1
The sequence I follow is:
docker start contenedor-graalvm-1
docker exec -it contenedor-graalvm-1 bash
cd code-with-quarkus-one/target
And then, successive launch attempts:
A)
./code-with-quarkus-one-1.0.0-SNAPSHOT-runner -Dquarkus.http.host=0.0.0.0 -Dquarkus.http.port=8118
B)
./code-with-quarkus-one-1.0.0-SNAPSHOT-runner -Dquarkus.http.host=192.168.49.147 -Dquarkus.http.port=8118
C)
./code-with-quarkus-one-1.0.0-SNAPSHOT-runner -Dquarkus.http.host=127.0.0.1 -Dquarkus.http.port=8118
D)
./code-with-quarkus-one-1.0.0-SNAPSHOT-runner -Dquarkus.http.host=localhost -Dquarkus.http.port=8118
That seems to start properly (except for option B), but none of them allow me to access the desired endpoint.
Any help will be appreciated.
Hi, I am trying to load data into Cassandra running in Docker. Unfortunately, I can't make it work. I am pretty sure the path is correct, as I copied and pasted it directly from the file's Properties dialog. Is there any alternative way to solve this?
P.S. I am using Windows 11 and the latest Cassandra, 4.1.
cqlsh:cds502_project> COPY data (id)
... FROM 'D:\USM\Data Science\CDS 502 Big data storage and management\Assignment\Project\forest area by state.csv'
... WITH HEADER = TRUE;
Using 7 child processes
Starting copy of cds502_project.data with columns [id].
Failed to import 0 rows: OSError - Can't open 'D:\\USM\\Data Science\\CDS 502 Big data storage and management\\Assignment\\Project\\forest area by state.csv' for reading: no matching file found, given up after 1 attempts
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.246 seconds (0 skipped).
Above is my code and the result. I followed https://www.geeksforgeeks.org/export-and-import-data-in-cassandra/ exactly, and it works when I create the data inside the Docker container, export it, and reimport it, but it does not work when I use external data.
I also noticed that the CSV file I exported using Cassandra in Docker is missing on my laptop, but it can be accessed from inside Docker.
The behaviour you are observing is what is expected from Docker. As far as I know there are cp commands (for example kubectl cp in Kubernetes, and docker cp for Docker) that copy data from outside a container to the inside and vice versa. You can either use those commands to move the data into or out of the container, or you can push your CSV into the container by building it into a Docker image.
You need to leverage a Docker bind mount (volume) in order to access local files from within your container: docker run -it -v <path> ...
See references below:
https://www.digitalocean.com/community/tutorials/how-to-share-data-between-the-docker-container-and-the-host
https://www.docker.com/blog/file-sharing-with-docker-desktop/
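For illustration, a rough sketch of that approach; the container name and the /import mount point are just placeholders, and the host path is taken from the question:

# recreate the Cassandra container with the host folder bind-mounted into it
docker run --name my-cassandra -d -v "D:\USM\Data Science\CDS 502 Big data storage and management\Assignment\Project":/import cassandra:4.1

# then, inside cqlsh, reference the path as the container sees it, e.g.
# COPY cds502_project.data (id) FROM '/import/forest area by state.csv' WITH HEADER = TRUE;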
I am trying to push a Docker image to the Google Cloud Platform Container Registry in order to define a custom training job, directly from inside a notebook.
After preparing the Dockerfile and the URI to push the image containing my train.py script to, I try to build and push the image directly from a notebook cell.
The exact command I try to execute is !docker build ./ -t $IMAGE_URI, where IMAGE_URI is the environment variable defined earlier. However, whenever I run this command I get the error /bin/bash: docker: command not found. I also tried executing it with the %%bash cell magic, via the subprocess library, and by running the command from a .sh file.
Unfortunately, none of these approaches work; they all return the same command not found error with exit code 127.
If I instead run the command from a bash terminal in JupyterLab, it works fine as expected.
Is there any workaround to make the build and push run inside the Jupyter notebook? I would like to keep the whole custom training process inside the same notebook.
If you follow this guide to create a user-managed notebook from Vertex AI Workbench and select Python 3, then the instance comes with Docker available.
So you will be able to use Docker commands such as ! docker build . inside the user-managed notebook.
Example:
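(A rough sketch of what the notebook cells could look like; the IMAGE_URI value here is just a placeholder, not from the original post.)

# in a notebook cell, assuming IMAGE_URI was set earlier, e.g. IMAGE_URI = 'gcr.io/my-project/trainer:latest'
! docker build ./ -t $IMAGE_URI
! docker push $IMAGE_URI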
I just downloaded this Docker image to set up a Spark cluster with two worker nodes. The cluster is up and running; however, I want to submit my Scala file to this cluster and I am not able to start spark-shell in it.
When I was using another Docker image, I was able to start it using spark-shell.
Can someone please explain whether I need to install Scala separately in the image, or whether there is a different way to start it?
UPDATE
Here is the error:
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark# ls /home/shangupta/Scripts/
ProfileData.json demo.scala queries.scala
TestDataGeneration.sql input.scala
root@a7b0682ff17d:/opt/spark# spark-shell /home/shangupta/Scripts/input.scala
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark#
You're getting command not found because PATH isn't set up correctly.
Use the absolute path /opt/spark/bin/spark-shell.
Also, I'd suggest packaging your Scala project as an uber-jar to submit with spark-submit, unless you have no external dependencies or don't mind adding --packages/--jars manually.
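For example, something along these lines should work inside the container; this is only a sketch, assuming the Spark installation lives under /opt/spark as your prompt suggests:

# run spark-shell via its absolute path and load the script with -i
/opt/spark/bin/spark-shell -i /home/shangupta/Scripts/input.scala

# or add the Spark bin directory to PATH for the current shell session
export PATH="$PATH:/opt/spark/bin"
spark-shell -i /home/shangupta/Scripts/input.scala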
My goal:
I have a pre-built Docker image and want to run all my Flows on that image.
Currently:
I have the following task, which runs on a LocalDaskExecutor.
The server on which the agent is running uses a different Python environment from the one needed to execute my_task, hence the need to run inside a pre-built image.
My question is: how do I run this Flow on a Dask executor such that it runs on the Docker image I provide (as the environment)?
import prefect
from prefect import task, Flow
from prefect.engine.executors import LocalDaskExecutor
from prefect.environments import LocalEnvironment
@task
def hello_task():
    logger = prefect.context.get("logger")
    logger.info("Hello, Docker!")

with Flow("My Flow") as flow:
    results = hello_task()

flow.environment = LocalEnvironment(
    labels=[], executor=LocalDaskExecutor(scheduler="threads", num_workers=2),
)
I thought that I needed to start the server and the agent on that Docker image first (as discussed here), but I guess there should be a way to simply run the Flow on a provided image.
Update 1
Following this tutorial, I tried the following:
import prefect
from prefect import task, Flow
from prefect.engine.executors import LocalDaskExecutor
from prefect.environments import LocalEnvironment
from prefect.environments.storage import Docker
@task
def hello_task():
    logger = prefect.context.get("logger")
    logger.info("Hello, Docker!")

with Flow("My Flow") as flow:
    results = hello_task()

flow.storage = Docker(registry_url='registry.gitlab.com/my-repo/image-library')
flow.environment = LocalEnvironment(
    labels=[], executor=LocalDaskExecutor(scheduler="threads", num_workers=2),
)
flow.register(project_name="testing")
But this built a new image, which it then uploaded to the registry_url provided. Afterwards, when I tried to run the registered flow, it pulled the newly created image, and the run has been stuck in the Submitted for execution state for several minutes now.
I don't understand why it pushed an image and then pulled it. I already have an image built in this registry; I'd like to specify that existing image to be used for task execution.
The way I ended up achieving this is as follows (a rough command sketch follows below):
Run prefect server start on the server (i.e. not inside Docker).
Apparently running docker-compose inside Docker is not a good idea.
Run prefect agent start inside the Docker image.
Make sure the flows are accessible from the Docker container (for example by mounting a shared volume between the container and the server).
You can see the source of my answer here.
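For illustration only, a rough sketch of those steps; the image name, the volume paths, and the use of --network host (so the agent can reach the server API on the host) are assumptions, not from the original answer:

# on the host: start the Prefect server
prefect server start

# start the agent inside the pre-built image, sharing the flows directory with the host
docker run -d --network host -v /path/to/flows:/path/to/flows registry.gitlab.com/my-repo/image-library/my-image prefect agent start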