I am using the following code to read each frame from an HLS stream (I invoke init_camera, shown below, with an HLS URL). I simply call read_camera_frame in a while loop, as fast as I can, since I believe VideoCapture.read() should block and return frames at a rate that corresponds to the FPS of the video stream.
def init_camera(camera_id):
    return cv2.VideoCapture(camera_id)

self.camera_cap = init_camera(self.image_info.get_camera_id())

def read_camera_frame(self):
    syst = time.time_ns()
    # Nanoseconds to seconds.
    time_since_last_pub = (syst - self.last_pub_time) / 1000000000
    time_since_last_stat = (syst - self.last_stat_time) / 1000000000
    # Periodically log throughput statistics.
    if time_since_last_stat > self.stat_report_interval:
        fps = self.frames_collected_since_last_report / self.stat_report_interval
        self.logger.info(f"Total Frames: {self.frame_cnt} "
                         f"Total Discards: {self.frame_discard_cnt} "
                         f"Frames Since Last Report: {self.frames_collected_since_last_report} "
                         f"FPS: {fps}")
        self.frames_collected_since_last_report = 0
        self.last_stat_time = syst
    self.logger.info(f"CameraReader: read a frame {self.frame_cnt}")
    ret, img = self.camera_cap.read()
    if ret:
        self.frame_cnt += 1
        self.frames_collected_since_last_report += 1
        # Discard the frame if the minimum publish interval has not yet elapsed.
        ts = self.min_frame_pub_interval - time_since_last_pub
        if ts > 0:
            self.frame_discard_cnt += 1
            return []
        self.last_pub_time = syst
        return [(img, [copy.deepcopy(self.image_info)])]
    raise CameraReaderException("Failed To Read Frame")
The FPS of the video I am playing is just about 30, as reported by:
fps = self.camera_cap.get(cv2.CAP_PROP_FPS)
self.logger.info(f"Source FPS {fps}")
Yet I see frames being read at around 130 per second. Why is VideoCapture.read() returning frames roughly 4 times faster than I expect? I thought it would return frames at the FPS of the video.
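For reference, here is a minimal standalone loop that reproduces the measurement (a sketch; the URL is a placeholder for my HLS stream, and it assumes the default OpenCV/FFmpeg backend):

import time
import cv2

cap = cv2.VideoCapture("https://example.com/stream.m3u8")  # placeholder HLS URL
print("Source FPS:", cap.get(cv2.CAP_PROP_FPS))

frames = 0
start = time.time()
while time.time() - start < 10.0:  # sample for ten seconds
    ret, _ = cap.read()
    if not ret:
        break
    frames += 1
elapsed = time.time() - start
print(f"Read {frames} frames in {elapsed:.1f}s ({frames / elapsed:.1f} frames/sec)")
cap.release()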
In games like Phantom Forces, or any FPS for that matter, if you look up or down, the arms and tools stay on screen. In a new Roblox Studio project, this does not happen by default. Basically, I want the arms and tools to follow the camera's rotation.
This can be done, but do you want other players to see the player turn the gun towards the camera?
local Camera = workspace.CurrentCamera
local Player = game.Players.LocalPlayer
local Character = workspace:WaitForChild(Player.Name)
local Root = Character:WaitForChild("HumanoidRootPart")
-- On R15 rigs the neck Motor6D lives under UpperTorso.
local Neck = Character:WaitForChild("UpperTorso"):FindFirstChildOfClass("Motor6D")
local YOffset = Neck.C0.Y
local CFNew, CFAng = CFrame.new, CFrame.Angles
local asin = math.asin

game:GetService("RunService").RenderStepped:Connect(function()
    -- Camera look direction expressed in the character's local space.
    local CameraDirection = Root.CFrame:toObjectSpace(Camera.CFrame).lookVector
    if Neck then
        if Character.Humanoid.RigType == Enum.HumanoidRigType.R15 then
            Neck.C0 = CFNew(0, YOffset, 0) * CFAng(0, -asin(CameraDirection.x), 0) * CFAng(asin(CameraDirection.y), 0, 0)
        elseif Character.Humanoid.RigType == Enum.HumanoidRigType.R6 then
            Neck.C0 = CFNew(0, YOffset, 0) * CFAng(3 * math.pi / 2, 0, math.pi) * CFAng(0, 0, -asin(CameraDirection.x)) * CFAng(-asin(CameraDirection.y), 0, 0)
        end
    end
end)
This example only works with R15, since the Motor6D is looked up under UpperTorso, which R6 rigs do not have.
If you don't want the other players to see this, then create a model of the gun on the client's side and stick it on the camera:
local Camera = workspace.CurrentCamera
local Player = game.Players.LocalPlayer
local Character = workspace:WaitForChild(Player.Name)
local Root = Character:WaitForChild("HumanoidRootPart")

local NAMES = {
    screen_gun = "Local_Gun",
    model = "Gun",
    view = "view"
}

--- For Player
local Gun = {
    screen = Instance.new("ScreenGui", Player:FindFirstChildOfClass("PlayerGui")),
    obj = Instance.new("ViewportFrame", Player:FindFirstChildOfClass("PlayerGui"):WaitForChild("ScreenGui")),
    part = Instance.new("Part", Player:WaitForChild("PlayerGui"):WaitForChild("ScreenGui"):WaitForChild("ViewportFrame")),
    mesh = Instance.new("SpecialMesh", Player:WaitForChild("PlayerGui"):WaitForChild("ScreenGui"):WaitForChild("ViewportFrame"):WaitForChild("Part")),
    offset = UDim2.new(0.7, 0, 0.6, 0),
    cam = Instance.new("Camera", Player:WaitForChild("PlayerGui"):WaitForChild("ScreenGui"):WaitForChild("ViewportFrame")),
    offset2 = CFrame.new(Vector3.new(1, 1, 1), Vector3.new(0, 0, 0)),
    size_view = UDim2.new(0, 300, 0, 300)
}

-- Wire the viewport up to its own camera and position the placeholder part.
Gun.obj.CurrentCamera = Gun.cam
Gun.part.Position = Vector3.new(0, 0, 0)
Gun.obj.Position = Gun.offset
Gun.obj.Size = Gun.size_view
Gun.obj.BackgroundTransparency = 1
Gun.cam.CFrame = Gun.offset2
Gun.screen.Name = NAMES.screen_gun
Gun.part.Name = NAMES.model
Gun.obj.Name = NAMES.view
Gun.part.Size = Vector3.new(1, 1, 2)

Gun.obj.Visible = false
local ToolInHand = false

-- Hide the real tool locally and show the viewport copy when a tool is equipped.
Character.ChildAdded:Connect(function(obj)
    if obj:IsA("Tool") and (obj:FindFirstChildOfClass("Part") or obj:FindFirstChildOfClass("MeshPart")) then
        if obj:FindFirstChildOfClass("MeshPart") then
            obj:FindFirstChildOfClass("MeshPart").LocalTransparencyModifier = 1
            Gun.mesh.MeshId = obj:FindFirstChildOfClass("MeshPart").MeshId
        elseif obj:FindFirstChildOfClass("Part") then
            obj:FindFirstChildOfClass("Part").LocalTransparencyModifier = 1
        end
        Gun.obj.Visible = true
        ToolInHand = true
    end
end)

-- Restore the real tool and hide the viewport copy when the tool is unequipped.
Character.ChildRemoved:Connect(function(obj)
    if obj:IsA("Tool") and (obj:FindFirstChildOfClass("Part") or obj:FindFirstChildOfClass("MeshPart")) then
        if obj:FindFirstChildOfClass("MeshPart") then
            obj:FindFirstChildOfClass("MeshPart").LocalTransparencyModifier = 0
        elseif obj:FindFirstChildOfClass("Part") then
            obj:FindFirstChildOfClass("Part").LocalTransparencyModifier = 0
        end
        Gun.obj.Visible = false
        ToolInHand = false
    end
end)
I am trying to use TensorFlow image retraining:
https://www.tensorflow.org/tutorials/image_retraining
I train like this, and it works:
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/image_retraining/retrain.py --image_dir D:/dev/detect_objects/flower_photos --bottleneck_dir D:/dev/detect_objects/tensorflow-master/retrain/bottleneck --architecture mobilenet_0.25_128 --output_graph D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --output_labels D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --saved_model_dir D:/dev/detect_objects/tensorflow-master/retrain/saved_model_dir --how_many_training_steps 100
When I predict a new image like this:
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/label_image/label_image.py --graph=D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --labels=D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --image=D:/dev/detect_objects/flower_photos/daisy/21652746_cc379e0eea_m.jpg
It gives the error:
KeyError: "The name 'import/Mul' refers to an Operation not in the graph."
label_image.py content:
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
What is the problem here?
Change this:
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
to this (the graph was retrained with --architecture mobilenet_0.25_128, so its input node is named "input" and expects 128×128 images):
input_height = 128
input_width = 128
input_mean = 0
input_std = 128
input_layer = "input"
output_layer = "final_result"
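Alternatively, you may be able to leave the script untouched and pass the same values as command-line flags; the label_image.py that ships alongside retrain.py defines flags with these names (check your copy's argparse definitions if in doubt):
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/label_image/label_image.py --graph=D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --labels=D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --image=D:/dev/detect_objects/flower_photos/daisy/21652746_cc379e0eea_m.jpg --input_layer=input --output_layer=final_result --input_height=128 --input_width=128 --input_mean=0 --input_std=128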
If there were no node in the graph called "import/Mul", and we didn't know what the graph was or how it was produced, there would be little chance that anyone could guess the right answer. You might try printing the list of operations in your graph using graph.get_operations() and locating an appropriate-sounding node (try the first one printed, which is usually the input).
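As a concrete sketch of that suggestion (assuming TensorFlow 1.x, as used by the retraining tutorial; the graph path is a placeholder):

import tensorflow as tf  # TensorFlow 1.x, matching the image_retraining tutorial

GRAPH_PATH = "output.pb"  # placeholder: path to your retrained graph

# Load the serialized GraphDef and import it under the "import" name scope,
# the same scope label_image.py uses.
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="import")

# The first operation printed is usually the input placeholder;
# the retrained output node is "import/final_result".
for op in graph.get_operations():
    print(op.name)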
My config file:
agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /var/SpoolDir
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://templatecentosbase.oneglobe.com:8020/user/Banking4
agent1.sinks.sink1.hdfs.filePrefix = Banking_Details
agent1.sinks.sink1.hdfs.fileSuffix = .avro
agent1.sinks.sink1.hdfs.serializer = avro_event
agent1.sinks.sink1.hdfs.serializer = DataStream
#agent1.sinks.sink1.hdfs.callTimeout = 20000
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollsize = 100000000
#agent1.sinks.sink1.hdfs.txnEventMax = 40000
agent1.sinks.sink1.hdfs.rollInterval = 0
#agent1.sinks.sink1.serializer.codeC =
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 100000000
agent1.channels.channel1.transactionCapacity = 100000000
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Can anyone help me get this resolved? The source file is nearly 400 MB, but Flume is writing it to HDFS in bits and pieces (files of around 1.5 MB to 2 MB).
I'm trying to create a new game in Corona SDK. I'm new to the Lua language, and my goal is to have a set of enemies in a kind of action game.
For this I think the best way is to have an array to store all my enemies; in this case I use three.
So my code is:
local enemies = {}

local enemy1 = display.newImageRect( "assets/images/sheep_mini.png", 60, 60 )
enemy1.anchorX = 0
enemy1.anchorY = 0
enemy1.name = "enemy"
enemy1.id = 1
enemy1.x, enemy1.y = 28, display.contentHeight - 260
enemy1.angularVelocity = 0
enemies[1] = enemy1

local enemy2 = display.newImageRect( "assets/images/sheep_mini.png", 60, 60 )
enemy2.anchorX = 0
enemy2.anchorY = 0
enemy2.id = 2
enemy2.name = "enemy"
enemy2.x, enemy2.y = screenW - 120, display.contentHeight - 420  -- assumes screenW is defined elsewhere
enemy2.angularVelocity = 0
enemies[2] = enemy2
After that I have a while loop to iterate over these enemies, but when I try to print the enemies from the array, I only get this:
Mar 31 02:23:36.576: table: 0x600000a66640
Mar 31 02:23:36.577: table: 0x600000a78e00
I'm using this code for the while loop:
local len = #enemies
local i = 1
while i <= len do
    enemy1 = enemies[i]
    print(enemy1)
    i = i + 1  -- advance the index, otherwise the loop never terminates
end
Can you help here? I'm new to Corona and also to Lua.
Thanks in advance.
Lua has no built-in way to pretty-print a table's contents (print only shows the table's address), but what you are trying to achieve can be done by iterating over its fields:
for k, v in pairs(enemy1) do print(k, v) end
For more information I suggest you read this: Table Serialization, which explains how to write functions to serialize/unserialize a table or object (usually, not always, represented as a table), that is, to convert it to and from a string representation. This is typically used for display (e.g. debugging) or for storing data in a file (e.g. persistence).