|
It is well known that calling GetPixel and SetPixel on a bitmap, addressing each pixel separately, is an extremely time-consuming graphics operation. I recently ran into a thread at the MASM32 forum that tests a number of typical graphics operations and measures the required clock cycles: Graphics, Memory DIB[^]. Skip the assembly code and just check out the listings of clock cycles for the GDI operations... I think you'll be amazed.
The DirectX SDK comes with a utility that displays the capabilities of your graphics accelerator. Basically, what you want is GetDeviceCaps[^]. If your graphics accelerator supports pixel shaders, perhaps that would be faster, but that route is very time-consuming and error-prone.
BitBlt is much faster... but I think what you want is to have the bitmap already altered before you need to display it. Undoubtedly, that occurred to you. I'm assuming that you cannot access the bitmap before your application loads or starts processing.
|
|
|
|
|
Hello Baltoro,
I am not using GetPixel/SetPixel. Instead I am using the bitmap's ScanLine property to get a pointer to that memory area and simply walking a pointer over it in a tight loop.
I do have access to the bitmap; in fact I call Delphi's JPEG code to decompress, with some optimizations I added to avoid needless memory reallocations between JPEG frames. I have since learned that there is an extension called DXVA (DirectX Video Acceleration), but that it is really tough to work with. However, I was also told that there are probably already JPEG decoders on my system that make use of DXVA. It might end up being an issue of learning how (if possible) to use DirectX to reach those hardware-accelerated decoders, but I have no idea how to go about doing that currently.
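For readers comparing the two approaches, here is a rough Python/NumPy analogy (Python standing in for the Delphi code, which isn't shown in this thread): touching each pixel through a per-pixel call versus one pass over the raw buffer, which is what walking a ScanLine pointer buys you. The frame and the brighten operation are invented for illustration.

```python
import numpy as np

def brighten_per_pixel(img, delta):
    # Touch each pixel individually -- the access pattern that
    # GetPixel/SetPixel forces on you.
    out = img.copy()
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.minimum(out[y, x].astype(np.int16) + delta, 255)
    return out

def brighten_bulk(img, delta):
    # One pass over the whole buffer -- the ScanLine-style approach.
    return np.minimum(img.astype(np.int16) + delta, 255).astype(np.uint8)

# Hypothetical small RGB frame; real video frames make the gap dramatic.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
assert np.array_equal(brighten_per_pixel(frame, 40), brighten_bulk(frame, 40))
```

On a full 1080p frame the per-pixel version is orders of magnitude slower even though both compute the same result.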
Thanks,
Robert
|
|
|
|
|
Hi, all.
I've been developing a mosaic system for two calibrated cameras using VS2008 and OpenCV, but my algorithm works badly.
Here is my program routine; any advice or help would be greatly appreciated!
1. I calibrate two cameras watching the same area and undistort the input images,
2. then I use cvFindChessBoardCornerGuesses and cvFindCornerSubPix to get corresponding points in the two undistorted input images,
3. then I process the corresponding points with cvFindHomography to get the homography between the two cameras,
4. and finally I use cvWarpPerspective to warp the right camera's image plane onto the left camera's image plane.
But the warped image is extremely different from the ideal one.
What's wrong with it?
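For what it's worth, the math in steps 3 and 4 can be sanity-checked outside OpenCV. Below is a hedged NumPy sketch of what cvFindHomography does at its core (the direct linear transform), with invented point coordinates; if OpenCV's result disagrees badly with this on the same correspondences, the corner matching in step 2 is the likely culprit.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: find H (3x3, up to scale) with dst ~ H * src.
    src, dst: (N, 2) arrays of corresponding points, N >= 4, not collinear."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_points(H, pts):
    # Apply H to (N, 2) points via homogeneous coordinates.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With noise-free correspondences this recovers the homography almost exactly; with real chessboard corners, outliers and localization error are what make the estimate drift.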
Best Wishes!
|
|
|
|
|
How far apart are your cameras? And when you say that the right image warped to the left image plane is far from "ideal", do you know what the ideal image should be from theory, or is it just what you expect it to be?
Have you tried the OpenCV group on Yahoo Groups? You might be more likely to get a good answer there than here.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists. I'm a proud denizen of the Real Soapbox[^] ACCEPT NO SUBSTITUTES!!!
|
|
|
|
|
Hey Tim, thanks for your reply.
My right camera is only rotated about 15 degrees to the right of the left camera.
I know the ideal warp result.
I've posted a message on the OpenCV group:
http://tech.groups.yahoo.com/group/OpenCV/message/64370
You're welcome to join the discussion.
Do you know how to get the homography between two cameras from the camera matrix or the extrinsic matrix?
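One standard answer, hedged since I'm guessing at the setup: if the two cameras share (approximately) the same center and differ only by a rotation R, the induced homography is H = K_right · R · K_left⁻¹, built purely from the intrinsics and the relative rotation. With a real baseline, a single homography is only exact for points on one plane (H = K_right (R − t·nᵀ/d) K_left⁻¹), which is one common reason a warp looks wrong away from the calibration plane. A NumPy sketch with made-up intrinsics:

```python
import numpy as np

def pan_rotation(deg):
    # Rotation about the camera's vertical (y) axis -- e.g. a 15-degree pan.
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def rotation_homography(K_left, K_right, R):
    """Homography taking left-image points to right-image points for two
    cameras with a shared center, related by rotation R (left -> right)."""
    return K_right @ R @ np.linalg.inv(K_left)

# Invented intrinsics: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, K, pan_rotation(15.0))
```

A quick consistency check is to project the same viewing ray through both cameras and verify that H maps one projection onto the other.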
Thanks!
|
|
|
|
|
Do you have the book "Learning OpenCV"[^]? I noticed a section on homography in it. In my exploration of OpenCV I haven't gotten quite that far; I've been spending a lot of time getting a framework in place so I can quickly and easily write test applications. That, and fighting the lack of documentation and weirdness in OpenCV.
|
|
|
|
|
I've got the book "Learning OpenCV", but it didn't solve my problem.
So how is your framework going?
|
|
|
|
|
The one thing I got out of the homography section was that he suggests using at least 10 images with the chessboard technique to get a good matrix. That's more than I expected.
My framework has been one step forward, then two steps back. I had to get a new computer last year, and to get it quickly I took one with Vista preinstalled. OpenCV and Vista don't get along well. I've finally started gaining traction. After a false start, I settled on wxWidgets for the GUI and have a lot of the basics of OpenCV wrapped in C++ to hide some of the ugliness (like the relationship between CvMatrix and IplImage). Right now I'm working through object detection and obstacle avoidance for mobile robots.
|
|
|
|
|
Object detection and obstacle avoidance for mobile robots?
That's really interesting. What development library are you using? OpenCV, or another robot-vision library on SourceForge.net?
|
|
|
|
|
Right now I'm using OpenCV, particularly for acquiring the image. There's a lot there for testing ideas, but I sometimes wonder if it was really worth it. I suspect that for any production system I'll want to recode the algorithms to remove the generality that OpenCV brings and write specialized versions that work with just the format of the camera I'll be using. I've looked at a few other libraries, but so far OpenCV has the best support, and that's minimal. I do have a friend from Homebrew Robotics here who's interning at Willow Garage, so I could get access to Gary Bradski if I really needed to. The last few days I've been looking at what it will take to visually detect the target cones for a RoboMagellan robot. How are you planning to use your stereo rig? Another friend has a two-wheel balancing robot that has a stereo vision setup running into a BeagleBoard.
|
|
|
|
|
My two cameras are fixed relative to each other, so I want to program a reliable algorithm based on something stable, like the camera parameters.
In the past I've done mosaic algorithms based on SIFT, SURF, KLT, and Harris corners, but the mosaic results of these feature-point-matching algorithms are very unstable: sometimes good, sometimes bad.
I've seen some robot stereo vision projects before; they are mostly based on feature-point matching.
|
|
|
|
|
Getting something robust enough to work reliably over a wide range of situations seems to be one of the big stumbling blocks in machine vision. Laser scanners seem to be more widely successful for mobile robots today, since they provide easy range information, but we're talking $$$. While a number of the DARPA Urban Challenge teams investigated vision systems, I don't think any actually used them for navigation. Willow Garage is using vision, but they still have laser scanners on their robot.
|
|
|
|
|
Hey Tim,
Today I got another project from my boss:
a car safety system based on cameras.
That means fixing some cameras (two or more) on the car to protect it from objects getting too close.
It reminded me of you; I think this project is similar to your robot obstacle-avoidance project.
But my hardware platform is the DM6467 or DM6446 from TI (Texas Instruments).
Have you finished the obstacle-avoidance software for your robot?
|
|
|
|
|
That sounds similar to the OMAP35x they use in the BeagleBoard, except the OMAP has an ARM Cortex-A8. I haven't gotten that far along yet. I have a level with a laser line generator that I want to try to detect with the camera, using the parallax shift to detect objects and get a range estimate, similar to this article[^], although I envisioned the laser and camera positions reversed so the line would always be in view on the floor. It probably wouldn't work too well outside. I've been able to detect the spot from a laser pointer fairly reliably with a webcam but haven't gotten around to trying the line yet. I might have to spring for an optical bandpass filter; I'll try some red plastic first.
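For anyone curious, the geometry behind that laser-parallax idea reduces to similar triangles. Here is a tiny Python sketch with made-up numbers (the focal length in pixels and the laser-to-camera baseline are assumptions, not measured hardware values):

```python
import numpy as np

def find_spot_column(red_row):
    # Naive laser-spot detector: brightest pixel along one scanline of
    # the red channel (a bandpass filter would make this more robust).
    return int(np.argmax(red_row))

def range_from_offset(pixel_offset, focal_px, baseline_m):
    """Triangulate range from the spot's pixel offset: by similar
    triangles, offset / focal = baseline / range."""
    return focal_px * baseline_m / pixel_offset

# Assumed hardware: 800 px focal length, laser mounted 10 cm from the lens.
row = np.zeros(640)
row[40] = 255.0                     # synthetic laser spot
offset = find_spot_column(row)      # 40 px from the zero-parallax column
print(range_from_offset(offset, 800.0, 0.1))  # -> 2.0 metres
```

Note how the offset shrinks as range grows, so resolution degrades quickly at distance; that is the usual argument for a wider baseline.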
Are you planning to extract depth from a single image, or are you going to use your stereo rig?
|
|
|
|
|
Apologies for the late reply; I've been away on a business trip these past few days.
It sounds like your robot project is on the right track. Congratulations!
I'm now using a stereo rig to extract depth.
What's your e-mail address?
I'll send you my project solution and project report.
Your advice is welcome!
|
|
|
|
|
Is anyone aware of any existing work on finding the centerline of a font outline?
How about a suggested method?
For example, the letter V would become just 2 lines instead of 7.
This will be used for engraving text.
I have tried exploding the glyph boundary into lines, creating perpendicular lines from the midpoints, and then trimming to the nearest intersection points. The results are not great.
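One alternative worth trying (a sketch, not production code): rasterize the glyph to a binary bitmap, skeletonize it with Zhang-Suen thinning so each stroke collapses to a one-pixel-wide centerline, then vectorize the skeleton into the line segments the engraver needs. A minimal Python version of the thinning step:

```python
import numpy as np

def zhang_suen_thin(img):
    """Skeletonize a binary image (1 = ink) with classic Zhang-Suen
    thinning: iteratively peel boundary pixels until only a
    one-pixel-wide centerline remains."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8-neighbours, clockwise starting from north.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                          # ink neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))          # 0->1 transitions
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img
```

Tracing the skeleton and fitting lines/arcs to it tends to handle junctions (the bottom of a V, the middle of an E) more gracefully than perpendicular-midpoint constructions, at the cost of rasterization resolution.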
Thanks,
Jason
|
|
|
|
|
You could find the top-left and bottom-right points of a bounding box and find the midpoints.
If you are referring to something more complex, please give more detail.
|
|
|
|
|
Hi guys,
Thanks to the colleagues who answered my previous questions. My new question is a modification of my first one.
How can I read and display from two USB webcams using MATLAB? I did it for one USB webcam, then changed the name of the variable for the second webcam, but it didn't work.
Thank you in advance.
Sarkuzi
|
|
|
|
|
I don't know MATLAB well enough to answer this off the top of my head, but if you show us the code that worked for the first webcam and the code that isn't working, we might be able to offer ideas on what you might be doing wrong.
P.S. Also list any error messages (if any) that you are getting.
|
|
|
|
|
Hi there,
I'm working on a project to find depth from a 3D stereo camera.
I have been advised to use "Ch Professional 6.1" with two USB cameras. Can anyone please point me to, or send me, complete code that reads from these two cameras and gives me the stereo picture? I just want to concentrate on my work... finding depth.
Many thanks in advance
Sarkuzi
|
|
|
|
|
sarkuzi wrote: send me a complete code
I bid $10,000.
Henry Minute
Do not read medical books! You could die of a misprint. - Mark Twain
Girl: (staring) "Why do you need an icy cucumber?"
“I want to report a fraud. The government is lying to us all.”
|
|
|
|
|
I raise you $5,000.
Luc Pattyn [Forum Guidelines] [My Articles]
DISCLAIMER: this message may have been modified by others; it may no longer reflect what I intended, and may contain bad advice; use at your own risk and with extreme care.
|
|
|
|
|
sarkuzi wrote: Can anyone please point me or send me a complete code that reads from these two cameras and give me the stereo picture. I just want to concentrate on my work... finding depth.
I'm not sure anything like that exists, and if it does, I kind of doubt it would be totally reliable in real-world situations without some human intervention. The problem I see is that there still is no AI algorithm that can divide an image perfectly into different objects, and object recognition is still a work in progress too. Both would be needed for a computer to match the objects from the two webcams and then compare the differences between the two images to find depth. The only way it might work, I would think, is if you had a scene with no gradual transitions between the colors of two different objects (this rarely exists in real-world images, but if you made an artificial scene it might work).
There are libraries available (such as the OpenCV library) that can help you a lot with getting input from webcams, if you don't mind writing some code. If you are looking for a stereo webcam, I did find this on the internet one time: http://www.minoru3d.com/[^], but I've never actually used it or tried it.
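To give a concrete feel for what "finding depth" involves once you do have two synchronized images: assuming a rectified pair (epipolar lines horizontal), even a naive sum-of-absolute-differences block matcher recovers disparity, and depth follows from Z = focal × baseline / disparity. This toy NumPy sketch is nowhere near the robustness of OpenCV's stereo correspondence routines:

```python
import numpy as np

def disparity_sad(left, right, max_disp, win=3):
    """Naive block matching on a rectified grayscale pair: for each pixel
    in the left image, slide a window leftwards along the same row of the
    right image and keep the shift with the smallest sum of absolute
    differences (SAD)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(max_disp + 1):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(np.int32)
                sad = int(np.abs(patch - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

With a disparity of 4, an assumed 800 px focal length, and a 10 cm baseline, Z = 800 × 0.1 / 4 = 20 m. Real matchers add subpixel interpolation, texture checks, and left-right consistency, which is exactly the "human intervention" gap described above.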
hope this helps,
Mike
|
|
|
|
|
I've been doing the same recently in C++ with Unity3D in mind, mainly for Windows/Vista. I suggest the video capture library found at:
www.muonics.net/school/spring05/videoInput/
or the CodeVis library by Mike Ellis, also very good, but not so hot with two cameras unless you want to start fiddling with the threading and serialisation.
Once you have the camera images in memory, try passing them to OpenCV for starters (cvFindStereoCorrespondence). If you want to do it in real time you will have to look a bit further: Middlebury College has some stereo matching links to try, and you could even look at CUDA by NVIDIA, but that then becomes dependent on graphics card capability.
As soon as I've got my code working reasonably well I will either provide the code, or list the method code and cite the sources (depending on what licenses I'm restricted by at the time!).
|
|
|
|