|
Hi.
How can we convert an RGB value to its equivalent in HSV?
Thanks.
|
|
|
|
|
By using some code that can be found in a CP article; search for RGB HSV.
Luc Pattyn [Forum Guidelines] [My Articles]
The quality and detail of your question reflects on the effectiveness of the help you are likely to get.
Show formatted code inside PRE tags, and give clear symptoms when describing a problem.
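For reference, the conversion asked about above is just a small formula. Here is a minimal Python sketch of the standard RGB-to-HSV math (variable names are my own, not from any particular library):

```python
def rgb_to_hsv(r, g, b):
    """Convert r, g, b in [0, 255] to (h, s, v) with h in degrees [0, 360),
    and s, v in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn

    # Hue: the dominant channel determines the 60-degree sector.
    if delta == 0:
        h = 0.0  # achromatic (grey), hue is undefined; use 0 by convention
    elif mx == r:
        h = (60 * ((g - b) / delta)) % 360
    elif mx == g:
        h = 60 * ((b - r) / delta) + 120
    else:
        h = 60 * ((r - g) / delta) + 240

    s = 0.0 if mx == 0 else delta / mx  # saturation: chroma relative to value
    v = mx                              # value: the brightest channel
    return h, s, v
```

For example, pure red (255, 0, 0) gives (0.0, 1.0, 1.0) and pure green (0, 255, 0) gives a hue of 120 degrees.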
|
|
|
|
|
Or by doing a simple web search? That's how I found out how to do it.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Hello!
I have implemented a bar chart on the surface of a Windows Form. How do I print it to hard copy?
Regards,
Sohail
|
|
|
|
|
Assuming that you are using .NET, which you don't say!
All Control descendants have a DrawToBitmap method. Use that, then all you have to do is work out how to print a Bitmap.
Seriously, there are thousands of examples of how to do this, both here on CP and on t'web. Do a little Binging!
Henry Minute
Do not read medical books! You could die of a misprint. - Mark Twain
Girl: (staring) "Why do you need an icy cucumber?"
“I want to report a fraud. The government is lying to us all.”
|
|
|
|
|
Hi!
I am using .NET and have drawn a chart on a Windows Form's client rectangle. I want to print it.
Thanks a lot, sir Henry.
|
|
|
|
|
Then I suggest that you do as I said in my last post. Look up the documentation for Form.DrawToBitmap, read it, and experiment.
After that, try Googling or Binging for c# print bitmap.
You might also want to consider filling your Form's client area with a Panel and drawing your bar chart on that, then using Panel.DrawToBitmap instead of the Form version.
Henry Minute
|
|
|
|
|
Thank You very much sir...!
|
|
|
|
|
I found this OpenGL 3.1 tutorial on Wikiscripts. This is the first part, but the author promised that there will soon be a sequel:
|
|
|
|
|
Source is now available to download!
|
|
|
|
|
Hi,
I am trying to develop a CAD-like application using OpenGL and wxWidgets. I have reached a stage where I can draw a circle, square, polygon, load an image, etc., and I am also able to zoom and pan.
As a further step, I want to move, scale, and rotate objects. To do this I have written code to select and pick the object that lies under the mouse pointer. Now what do I have to do to move the picked object? I tried finding the difference between the old and new points and recalculating the points of the object:
C.x += dx;
C.y += dy;
where dx and dy are the differences between the old and new x and y values respectively, and C.x and C.y are the original x and y coordinates.
This shows some movement, but not as desired: the object isn't moving along with the mouse. For a small mouse movement it shows a huge, very fast displacement. So what can be done to make a selected object move along with the mouse pointer?
Also, I'm not able to zoom with reference to the mouse pointer; it zooms with reference to the origin (0,0,0) only. Please help me.
Thanks in advance.
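Both symptoms usually come from mixing screen-space pixels with world-space units. A rough Python sketch of the two fixes (the function names and parameters are my own, not from OpenGL or wxWidgets): scale mouse deltas by the current world-units-per-pixel ratio, and recentre the view when zooming so the point under the cursor stays put.

```python
def screen_delta_to_world(dx_px, dy_px, view_width_world, viewport_width_px):
    """Scale a mouse delta in pixels by the current world-units-per-pixel
    ratio, so a dragged object tracks the pointer at any zoom level."""
    units_per_pixel = view_width_world / viewport_width_px
    # In OpenGL the world y axis usually points up while screen y points
    # down, hence the sign flip on dy.
    return dx_px * units_per_pixel, -dy_px * units_per_pixel

def zoom_about(cx, cy, px, py, factor):
    """Zoom by `factor` while keeping the world point (px, py) fixed under
    the cursor: the view centre (cx, cy) moves toward the pointer."""
    nx = px + (cx - px) / factor
    ny = py + (cy - py) / factor
    return nx, ny
```

For example, if the viewport is 800 px wide and currently shows 80 world units, a 10 px drag should move the object exactly 1 world unit, regardless of zoom.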
|
|
|
|
|
I have a program running on Windows 2000 or better PCs that needs to do a per-pixel operation on a Windows bitmap. Currently I'm using a tight loop that does little more than a pointer increment followed by a quick add operation on each pixel over the entire bitmap. This is done in real time, so I need it fast. On a 320 x 240 image that ends up being 230,400 addition operations, since they are RGB bitmaps (320 x 240 x 3). This is being done in real time on an image stream that is pumping out 25 frames per second. At that resolution it's fast enough, but at 640 x 480 I have to start dropping frames significantly to keep up. I am not displaying the modified bitmap on the screen at any time. Instead I am shipping it off to a remote location over the Internet for display on the destination system.
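The kind of loop described above can be sketched like this in Python for illustration (the real code walks raw scanline pointers; `add_to_pixels` is a made-up name, and the clamp to 255 is an assumption about the desired overflow behaviour):

```python
def add_to_pixels(buf, value):
    """Per-pixel brightness add over a raw RGB buffer (3 bytes per pixel),
    clamped to 255. `buf` is a bytearray of length width * height * 3."""
    for i in range(len(buf)):
        v = buf[i] + value
        buf[i] = 255 if v > 255 else v
    return buf
```

At 640 x 480 this touches 921,600 bytes per frame, which is why a scalar per-byte loop starts to fall behind at 25 fps.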
I was wondering if I could push this operation to the graphics accelerator using one of the Windows graphics APIs, like DirectDraw, etc. Or do PC graphics accelerators only help with operations that are drawn to the screen (local PC video memory)? I assume that if it's possible, I'd need to pump the bitmap to the graphics accelerator, know how to do a global operation on each pixel, and then copy it back to local RAM? If it is possible to use the graphics accelerator to help with per-pixel operations on an off-screen bitmap, what are the pros and cons?
Finally, is there a way to push JPEG decompression and compression operations onto the Graphics Accelerator?
If it is possible to do these things, I would like to know where there is a quick, easy-to-dive-into sample. I don't have any need for shading or rotations or any complex graphics operations like that, so I would like to avoid wading through a ton of reading just to learn how to do a simple global pixel-operation task.
Thanks in advance.
|
|
|
|
|
It is well known that calling GetPixel and SetPixel on a bitmap, addressing each pixel separately, is an extremely time-consuming graphics operation. I recently ran into a thread (at the MASM32 forum) that tests a number of typical graphics operations and determines the required clock cycles: Graphics, Memory DIB[^]. Skip the assembly code and just check out the listings of clock cycles for the GDI operations... I think you'll be amazed.
The DirectX SDK comes with a utility that displays the capabilities of your graphics accelerator. Basically, what you want is GetDeviceCaps[^]. If your graphics accelerator supports pixel shaders, perhaps that would be faster, but programming them is very time-consuming and error-prone.
BitBlt is much faster... but I think what you want is to have the bitmap already altered before you need to display it. Undoubtedly, that occurred to you. I'm assuming that you cannot access the bitmap before your application loads, or starts processing.
|
|
|
|
|
Hello Baltoro,
I am not using GetPixel/SetPixel. Instead I am using the Bitmap's scanline property to get a pointer to that memory area and simply walking a pointer over that area in a tight loop.
I do have access to the bitmap; in fact I call Delphi's JPEG code to decompress, with some optimizations I added to avoid needless memory reallocations between JPEG frames. I have since learned that there is an extension called DXVA (DirectX Video Acceleration), but that it is really tough to work with. However, I was also told that there probably already are JPEG compressors on my system that make use of DXVA. It might end up being an issue of learning how (if possible) to use DirectX to utilize those hardware-accelerated compressors, but I have no idea how to go about doing that currently.
Thanks,
Robert
|
|
|
|
|
Hi, all.
I am currently developing a mosaic system for two calibrated cameras using VS2008 and OpenCV, but my algorithm works badly.
Here is my program routine, Any advice or help would be greatly appreciated!
1. I calibrate the two cameras watching the same area, and undistort the input images;
2. then I use cvFindChessBoardCornerGuesses and cvFindCornerSubPix to get corresponding points in the two undistorted input images;
3. then I process the corresponding points using cvFindHomography to get the homography between the two cameras;
4. finally I use cvWarpPerspective to warp the right camera's image plane to the left camera's image plane.
But the warped image is extremely different from the ideal one.
What's wrong with it?
Best Wishes!
|
|
|
|
|
How far apart are your cameras? And when you say that the right image warped to the left image plane is far from "ideal", do you know from theory what the ideal image should be, or is it just what you expect it to be?
Have you tried the OpenCV group on Yahoo Groups? You might be more likely to get a good answer there than here.
|
|
|
|
|
Hey Tim, thanks for your reply.
My right camera is only rotated 15 degrees to the right from the left camera.
I know the ideal warp result.
I've posted a message on the OpenCV group:
http://tech.groups.yahoo.com/group/OpenCV/message/64370
You're welcome to join the discussion.
Do you know how to get the homography between two cameras from the camera matrix or extrinsic matrix?
Thanks!
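For a planar scene, the homography between two calibrated views can be composed directly from the calibration data as H = K2 (R - t n^T / d) K1^-1, where R and t are the relative pose, n is the plane normal in the first camera's frame, and d its distance (the sign of the t term depends on the plane convention used). A pure-Python sketch under those assumptions (helper names are my own):

```python
def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate (transposed cofactors)."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def planar_homography(K1, K2, R, t, n, d):
    """H = K2 (R - t n^T / d) K1^-1 for a plane with normal n at
    distance d in the first camera's frame."""
    M = [[R[i][j] - t[i] * n[j] / d for j in range(3)] for i in range(3)]
    return matmul(K2, matmul(M, inv3(K1)))
```

As a sanity check, identical cameras with no relative motion give the identity homography.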
|
|
|
|
|
Do you have the book "Learning OpenCV"[^]? I noticed a section on homography in it. In my exploration of OpenCV I haven't gotten quite that far. I've been spending a lot of time getting a framework in place so I can quickly and easily write test applications. That, and fighting the lack of documentation and weirdness in OpenCV.
|
|
|
|
|
I've got the book "Learning OpenCV", but it didn't solve my problem.
So how is your framework going?
|
|
|
|
|
The one thing I got out of the homography section was that he suggests using at least 10 images when using the chessboard technique to get a good matrix. That's more than I expected.
My framework has been one step forward, then two steps back. I had to get a new computer last year and, to get it quickly, I took one with Vista preinstalled. OpenCV and Vista don't get along well. I've finally started gaining traction. After a false start, I settled on wxWidgets for the GUI and have a lot of the basics of OpenCV wrapped in C++ to hide some of the ugliness (like the relationship between CvMatrix and IplImage). Right now I'm working through object detection and obstacle avoidance for mobile robots.
|
|
|
|
|
Object detection and obstacle avoidance for mobile robots?
That's really interesting. What development library are you using? OpenCV, or another robot vision library on SourceForge.net?
|
|
|
|
|
Right now I'm using OpenCV, particularly for acquiring the image. There's a lot there for testing ideas, but I sometimes wonder if it was really worth it. I suspect that for any production system I'll want to recode the algorithms to remove the generality that OpenCV brings and do specialized versions that work with just the format of the camera I'll be using. I've looked at a few other libraries, but so far OpenCV has the best support, and that's minimal. I do have a friend from Homebrew Robotics here who's interning at Willow Garage, so I could get access to Gary Bradski if I really needed to. The last few days I've been looking at what it will take to visually detect the target cones for a RoboMagellan robot. How are you planning to use your stereo rig? Another friend has a two-wheel balancing robot that has a stereo vision setup running into a Beagleboard.
|
|
|
|
|
My two cameras are relatively fixed, so I want to program a reliable algorithm based on something stable, such as the camera parameters.
In the past, I've done mosaic algorithms based on SIFT, SURF, KLT, and Harris corners, but the mosaic results of these feature-point-matching algorithms are very unstable: sometimes good, sometimes bad.
I've seen some robot stereo vision projects before; they are mostly based on feature point matching.
|
|
|
|
|
Getting something robust enough to reliably work over a wide range of situations seems to be one of the big stumbling blocks in machine vision. Laser scanners seem to be more widely successful for mobile robots today since they provide easy range information but we're talking $$$. While a number of the DARPA Urban Challenge teams investigated vision systems, I don't think any actually used them for navigation. Willow Garage is using vision but they still have laser scanners on their robot.
|
|
|
|
|
Hey Tim,
Today I got another project from my boss:
a car safety system based on cameras.
That means fixing some cameras (two or more) on the car, to protect the car from objects getting too close.
It reminded me of you. I think this project is similar to your robot obstacle-avoidance project.
But my hardware platform is the DM6467 or DM6446 from TI (Texas Instruments).
Have you developed the software for your robot's obstacle avoidance yet?
|
|
|
|