|
Chris,
Which is better to install: Visual Studio Code or Visual Studio 2019? I might want to try making an ALPR module. I already made a REST API that uses DeepStack; see the link below. Some people have issues getting it to work. I think if it were a SenseAI module it would be a lot easier for people to set up.
Blue Iris and DeepStack ALPR | IP Cam Talk[^]
Thanks,
Mike
|
|
|
|
|
I like Visual Studio simply because I've used it for 20 years and the keystrokes and integration are so comfortable to me. There's a community version that's free, which is great. The macOS version, however, is horrible, and for that I way prefer VS Code.
Personal opinion is I'd use VS 2022 on Windows, VS Code elsewhere.
cheers
Chris Maunder
|
|
|
|
|
Is there a way to set the minimum size of a detected object somehow?
The detection itself works like a charm, but I get quite a few false positives due to sun, shadows, and the time of day.
For better understanding I attached an image: Imgur: The magic of the Internet[^]
|
|
|
|
|
I love that idea! At the moment: no. But it's easy enough to add, so we'll queue that up as a feature. It would be a % width / % height thing.
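Until that lands, a workaround would be to filter the detections on your side before acting on them. A rough Python sketch (the endpoint and response fields match those shown elsewhere in this thread; the thresholds, frame size and snapshot.jpg filename are placeholders you'd adjust):

import requests

# Assumed thresholds and frame size - tune these for your camera
MIN_W_PCT, MIN_H_PCT = 5.0, 5.0   # minimum box size as a % of the frame
FRAME_W, FRAME_H = 1920, 1080

with open("snapshot.jpg", "rb") as f:
    response = requests.post("http://localhost:5000/v1/vision/detection",
                             files={"image": f}).json()

# Keep only predictions whose bounding box is big enough
kept = [p for p in response.get("predictions", [])
        if (p["x_max"] - p["x_min"]) / FRAME_W * 100 >= MIN_W_PCT
        and (p["y_max"] - p["y_min"]) / FRAME_H * 100 >= MIN_H_PCT]

for p in kept:
    print("{} - {:.1f}%".format(p["label"], p["confidence"] * 100))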
cheers
Chris Maunder
|
|
|
|
|
Sounds perfect! Thanks for your fast reply!
|
|
|
|
|
What a great idea! Finally we'll have a way to stop cats showing up as a "person"!
|
|
|
|
|
|
As you probably know, it worked well integrating with BlueIris (I just pointed the DeepStack URL at my SenseAI URL instead). Easy peasy, lemon squeezy. The whole setup and configuration took less than 15 minutes, with only two noticeable "issues". First, the AI status button within BI no longer returns anything (as the status API calls are different), and second, face recognition is not triggering. DeepStack would automatically return face info if a person was seen, in this format:
{"type":"MOTION_A", "trigger":"ON", "memo":"person:81%", "analysis":
[{"api":"objects","found":{"success":true,"predictions":
[{"confidence":0.54902047,"label":"laptop","y_min":1736,"x_min":775,"y_max":2456,"x_max":1487},
{"confidence":0.5561656,"label":"chair","y_min":2193,"x_min":1266,"y_max":2945,"x_max":1976},
{"confidence":0.6813332,"label":"person","y_min":1274,"x_min":580,"y_max":2817,"x_max":1389},
{"confidence":0.9031852,"label":"car","y_min":565,"x_min":702,"y_max":1044,"x_max":1553}],
"duration":0}},
{"api":"faces","found":{"success":true,"predictions":
[{"confidence":0.80594426,"userid":"Jay","y_min":1354,"x_min":916,"y_max":1662,"x_max":1146}],
"duration":0}}]}
I'm currently using a Python script to catch any 'person' matches, then send them to the faces API. Would it be worth changing the /v1/vision/detection API to add the faces API to its response (only if there are faces registered)? Or it could be that the BlueIris team is already building both options into their NVR tool... (I'll try to ping them as well, but they are usually a pretty forward-thinking bunch).
|
|
|
|
|
I'll pass this on to Ken @ Blue Iris and see what's happening at his end, but I think the issue may be that if you have existing face registrations it won't pick them up, because the location of the persisted face registration database has changed.
If you're brave you may want to try this:
- Head to where senseAI is installed: C:\Program Files\CodeProject\SenseAI
- Go to the Vision module's folder: C:\Program Files\CodeProject\SenseAI\AnalysisLayer\DeepStack
- Edit the modulesettings.windows.json file to read:
{
  "Modules": {
    "FaceProcessing": {
      "Runtime": "python37",
      "EnvironmentVariables": {
        "DATA_DIR": "%MODULES_PATH%\DeepStack\datastore"
      }
    }
  }
}
- Restart the senseAI service
Let me know if that helps.
cheers
Chris Maunder
|
|
|
|
|
Thanks, I put this in (well, put in forward slashes after it complained):
"DATA_DIR": "%MODULES_PATH%/DeepStack/datastore"
It didn't seem to add the face support, but I'll keep checking. Thanks, and I'm excited for what's coming next.
|
|
|
|
|
Yeah, sorry - those should have been double backslashes, but single forward slashes work too.
Can you try going to localhost:5000 to open up the dashboard and let me know if the log output shows anything that looks like an error? You can also click on the Explorer (top centre button of the dashboard) to launch the senseAI explorer, and that will allow you to test the face registration / recognition / listing.
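If it's easier to test from code than from the Explorer, a registration call should look roughly like this. Note this assumes SenseAI mirrors DeepStack's face register API, so treat the endpoint and parameter names as an assumption, and "Jane" / jane.jpg are placeholders:

import requests

# Register a face so /v1/vision/face/recognize can return a userid for it.
with open("jane.jpg", "rb") as f:
    response = requests.post("http://localhost:5000/v1/vision/face/register",
                             files={"image": f},
                             data={"userid": "Jane"}).json()
print(response)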
Any info you can provide is helpful.
cheers
Chris Maunder
|
|
|
|
|
Here's what I'm seeing in the startup logs; nothing jumps out as an error:
10:06:52 AM: App directory C:\Program Files\CodeProject\SenseAI
10:06:52 AM: Started Background Removal
10:06:52 AM: Not starting Legacy Object Detection: Not set as enabled
10:06:52 AM: Started Portrait Filter
10:06:52 AM: Started Sentiment Analysis
10:06:53 AM: PortraitFilter.dll: Starting inference session...
10:06:54 AM: PortraitFilter.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: PortraitFilter.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: PortraitFilter.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: ObjectDetector.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: ObjectDetector.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: ObjectDetector.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:54 AM: SentimentAnalysis.dll: 2022-06-21 10:06:54.821194: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from: C:\Program Files\CodeProject\SenseAI\AnalysisLayer\SentimentAnalysis\sentiment_model
10:06:54 AM: SentimentAnalysis.dll: 2022-06-21 10:06:54.827749: I tensorflow/cc/saved_model/reader.cc:148] Reading SavedModel debug info (if present) from: C:\Program Files\CodeProject\SenseAI\AnalysisLayer\SentimentAnalysis\sentiment_model
10:06:54 AM: SentimentAnalysis.dll: To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
10:06:54 AM: SentimentAnalysis.dll: 2022-06-21 10:06:54.896526: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 75320 microseconds.
10:06:54 AM: scene.py: Scene Detection module started.
10:06:55 AM: Latest version available is 1.4-Beta
10:06:55 AM: Version check: This is the latest version
10:06:55 AM: PortraitFilter.dll: Background Portrait Filter Task Started.
10:06:55 AM: SentimentAnalysis.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:55 AM: SentimentAnalysis.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:55 AM: SentimentAnalysis.dll: info: Microsoft.Hosting.Lifetime[0]
10:06:55 AM: SenseAI Object Detection module started.
10:06:55 AM: face.py: Face Detection module started.
10:06:56 AM: sense_rembg_adapter.py: RemoveBackground module started.
10:06:56 AM: textsummary.py: TextSummary module started.
10:06:56 AM: SentimentAnalysis.dll: info: SentimentAnalysis.SentimentAnalysisWorker[0]
10:06:56 AM: Sentiment Analysis module started.
Also, just to be clear, face recognition is working great (and fast!) for me:
import io
import requests
from PIL import Image

piece = Image.open("snapshot.jpg")  # stand-in for the <PIL Image Object> I already have

# Object detection on the full frame
buf = io.BytesIO()
piece.save(buf, format='JPEG')
byte_im = buf.getvalue()
response = requests.post("http://daedelus.lan:5000/v1/vision/detection", files={"image": byte_im}).json()
for object in response["predictions"]:
    print("{} - {}%".format(object["label"], round(object["confidence"]*100, 1)))
print(response)

# Face detection on the same frame
response = requests.post("http://daedelus.lan:5000/v1/vision/face", files={"image": byte_im}).json()
print('FACE finder: {}'.format(response))
for object in response["predictions"]:
    face_boxes = [object["x_min"], object["y_min"], object["x_max"], object["y_max"]]
    print("{} - {}%".format(face_boxes, round(object["confidence"]*100, 1)))
    # Crop out the face and send just the crop to the recognizer
    # (fresh buffer so the original JPEG bytes aren't appended to)
    face = piece.copy().crop((face_boxes[0], face_boxes[1], face_boxes[2], face_boxes[3]))
    face_buf = io.BytesIO()
    face.save(face_buf, format='JPEG')
    byte_face = face_buf.getvalue()
    response = requests.post("http://daedelus.lan:5000/v1/vision/face/recognize", files={"image": byte_face}).json()
    print('FACE recognizer: {}'.format(response))
    for object_f in response["predictions"]:
        print("FACE: {} - {}%".format(object_f["userid"], round(object_f["confidence"]*100, 1)))
results:
person - 92.3%
tennis racket - 38.7%
car - 92.5%
potted plant - 34.1%
tennis racket - 52.7%
{'predictions': [{'label': 'person', 'confidence': 0.9226199, 'y_min': 662, 'x_min': 547, 'y_max': 1522, 'x_max': 1025}, {'label': 'tennis racket', 'confidence': 0.38740522, 'y_min': 793, 'x_min': 538, 'y_max': 895, 'x_max': 771}, {'label': 'car', 'confidence': 0.924697, 'y_min': 29, 'x_min': 17, 'y_max': 509, 'x_max': 820}, {'label': 'potted plant', 'confidence': 0.34054393, 'y_min': 1353, 'x_min': 393, 'y_max': 1550, 'x_max': 652}, {'label': 'tennis racket', 'confidence': 0.5270328, 'y_min': 897, 'x_min': 270, 'y_max': 1260, 'x_max': 808}], 'success': True}
FACE finder: {'success': True, 'predictions': [{'confidence': 0.6121846437454224, 'x_min': 829, 'y_min': 682, 'x_max': 890, 'y_max': 778}]}
[829, 682, 890, 778] - 61.2%
FACE recognizer: {'success': True, 'predictions': [{'confidence': 0.6947708129882812, 'userid': 'Jasmine', 'x_min': 829, 'y_min': 682, 'x_max': 890, 'y_max': 778}]}
FACE: Jasmine - 69.5%
It's just not returning the face information through BlueIris when I point BI at the SenseAI url. It is returning all the results of the detection predictions. I don't have enough insight to see what BI is calling internally and whether the APIs match.
Thanks again for your attention!
|
|
|
|
|
Hey guys, congrats on the project, it's excellent.
It seems to be fully compatible with Agent DVR - you might want to add that to the compatibility docs with a setup walkthrough?
Agent DVR DeepStack AI/ senseAI[^]
|
|
|
|
|
That's fantastic! I'll try it out and add it to the docs.
Thanks for the tip. 👍
cheers
Chris Maunder
|
|
|
|
|
|
Funny you should mention that! We were talking about exactly this today and I'd like it to be done this week.
I have a personal need for a raccoon detector. It's a long, painful story.
cheers
Chris Maunder
|
|
|
|
|
Great. I'm also working on a Python REST API that uses DeepStack to do ALPR; do you think there is a way to add it to this project? I am not a programmer, but with Google I was able to make it work.
|
|
|
|
|
I see 365 scene types and 80 objects can be detected. Is there a list of detectable scenes and objects?
|
|
|
|
|
I think they are using the YOLOv5 models, which detect the objects below (you can also pull the class names straight from a model; see the snippet after the list):
'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush'
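To double-check the list yourself, the class names are stored on the model. A quick sketch assuming the standard ultralytics/yolov5 torch.hub interface (not something SenseAI exposes directly):

import torch

# Load a stock YOLOv5 model and print its class names (the first run
# needs internet access so torch.hub can fetch the repo and weights)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
print(model.names)   # the 80 COCO class labels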
|
|
|
|
|
I don't see the keyword "package". That's a bummer for my camera watching the front porch.
But hey... Detecting a toilet can be good.
|
|
|
|
|
Yeah - the Amazon package delivery detector is a custom model we will add (or, even better, source from someone kind!)
It's high on my TODO list.
cheers
Chris Maunder
|
|
|
|
|
Chris,
I could look into making a package model or adding it to one of my custom models.
Mike
|
|
|
|
|
Yes please!
cheers
Chris Maunder
|
|
|
|
|
What model format are you using? From what I can tell it is ONNX.
|
|
|
|
|
We're going to lean on DeepStack's model training, which is essentially the YOLOv5 training (same code, more or less) and outputs PyTorch .pt files.
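If anyone wants to sanity-check a trained .pt before sharing it, it can be loaded the way YOLOv5 custom weights normally are. A sketch assuming the ultralytics/yolov5 torch.hub interface, with package.pt and test.jpg as placeholder names:

import torch

# Load custom YOLOv5 weights (e.g. a package detector trained with the
# DeepStack/YOLOv5 training scripts) and run them on a test image
model = torch.hub.load('ultralytics/yolov5', 'custom', path='package.pt')
results = model('test.jpg')
results.print()                    # summary of detections
print(results.pandas().xyxy[0])    # boxes, confidences and labels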
cheers
Chris Maunder
|
|
|
|
|