|
Not in a general way, no.
If you can map your objects uniquely to the natural numbers [0 .. n), then you can do it in O(n) time with O(n) storage. Create an array the size of your list. Iterate through the list, and put each object in its corresponding slot in the array. Then iterate through the array and rebuild your list.
This places a really bad constraint on your contained objects, of course, but that's one price to pay for trying to do the impossible. (It has been proven that you cannot come up with a general comparison-based sorting algorithm faster than O(n log n).)
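A minimal C# sketch of that index-based placement, assuming each object carries a unique Key in [0 .. n) (the Item type and Key property here are just illustrative placeholders):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical item type; Key is assumed to be unique and in [0 .. n).
class Item
{
    public int Key;
    public string Payload;
}

static class IndexSort
{
    // Place each item directly into slots[item.Key], then read the slots back
    // in order: O(n) time, O(n) extra storage.
    public static List<Item> Sort(List<Item> items)
    {
        var slots = new Item[items.Count];
        foreach (var item in items)
            slots[item.Key] = item;            // relies on keys being unique and in range

        var result = new List<Item>(items.Count);
        foreach (var slot in slots)
            result.Add(slot);
        return result;
    }
}
```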
--
Kein Mitleid Für Die Mehrheit
|
Hi,
I have an image processing problem that I would appreciate any suggestions on how to tackle (ideally in C#):
Problem: I have a series of still images of circular discs (think audio CDs) captured from a video camera. Every time a disc passes the camera a still image is captured. The disc images contain text and graphics; the content can be considered random. When the discs are captured by the camera they are in a random orientation. I’m looking for an algorithm that will automatically rotate the image so that it is ‘aligned’, i.e. the majority of text (if any) is horizontal.
Any suggestions of what to try?
Thanks
|
Are the discs some solid color otherwise? Perhaps you could try drawing parallel lines through the image at ~15 degree increments from 0 to 165 degrees, then compare the number of color changes along those lines. The fewer color changes, the better aligned your image is.
You could also try finding the center of the disc, then drawing circles starting from the outside. Once you hit something of a different color on opposite sides of the disc, you can compute the angle between those hits and rotate according to that. You may also want to try blurring the image to begin with, so that any lettering becomes a stripe of some color on the disc.
Perhaps another idea is to pick a common, rotationally asymmetric letter such as 'e' (since you know the language on the CDs) and try to find it on the disc. Compare the rotation of the letter to what you expect, and rotate the disc accordingly.
These are just some of the approaches I thought of sitting here (a rough sketch of the first one is below), but I have never personally done much image processing. Good luck,
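A minimal C# sketch of the line-scan idea, assuming a System.Drawing Bitmap; the 15-degree step, 10-pixel line spacing, and brightness threshold are all just illustrative assumptions:

```csharp
using System;
using System.Drawing;

static class DiscAlignment
{
    // Score each candidate angle by counting brightness transitions along
    // parallel lines at that angle; the angle with the fewest transitions is
    // assumed to be the one where text runs along the scan lines.
    public static double EstimateBestAngle(Bitmap image, double stepDegrees = 15.0)
    {
        double bestAngle = 0, bestScore = double.MaxValue;
        for (double angle = 0; angle < 180; angle += stepDegrees)
        {
            int score = CountTransitions(image, angle * Math.PI / 180.0);
            if (score < bestScore) { bestScore = score; bestAngle = angle; }
        }
        return bestAngle;
    }

    static int CountTransitions(Bitmap image, double angle)
    {
        double cx = image.Width / 2.0, cy = image.Height / 2.0;
        double dx = Math.Cos(angle), dy = Math.Sin(angle);   // direction along each line
        double nx = -dy, ny = dx;                            // normal between lines
        int half = Math.Max(image.Width, image.Height) / 2;
        int transitions = 0;

        // Walk a family of parallel lines spaced 10 pixels apart through the image.
        for (int offset = -half; offset < half; offset += 10)
        {
            bool havePrev = false, prevDark = false;
            for (int t = -2 * half; t < 2 * half; t++)
            {
                int x = (int)Math.Round(cx + t * dx + offset * nx);
                int y = (int)Math.Round(cy + t * dy + offset * ny);
                if (x < 0 || y < 0 || x >= image.Width || y >= image.Height) continue;

                bool dark = image.GetPixel(x, y).GetBrightness() < 0.5; // assumed threshold
                if (havePrev && dark != prevDark) transitions++;
                prevDark = dark;
                havePrev = true;
            }
        }
        return transitions;
    }
}
```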
Sounds like somebody's got a case of the Mondays
-Jeff
|
Using audio CDs as an example: the discs generally contain some text and some graphics, but the content is virtually random. I think one approach would be to separate the text from the image, work out the orientation of each string on the disc, then rotate the disc image so that the majority of the text is correctly aligned…
So maybe the question is: how do you identify text within an image?
Thanks for your thoughts…
|
There are actually two problems: Recognizing the alignment of an image, and rotating the image.
Recognizing the alignment depends on the image's features; it can't be done for random content.
For rotation there are a number of options. The simplest is to use the .NET rotation transform. A sophisticated algorithm that doesn't lose or distort pixels is described in a US patent by Thomas Sterling.
A common, straightforward approach is to go through each pixel in the destination image, rotate its coordinates in the reverse direction, and take the pixel at that location in the source image. This approach also lets you optionally blend with neighboring pixels for antialiasing.
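A minimal C# sketch of that backward-mapping rotation using System.Drawing, with nearest-neighbor sampling and no antialiasing (the white background fill is just an assumption):

```csharp
using System;
using System.Drawing;

static class Rotation
{
    // For every destination pixel, rotate its coordinates back by -angle
    // around the image center and copy the nearest source pixel.
    public static Bitmap RotateImage(Bitmap source, double angleDegrees)
    {
        double angle = angleDegrees * Math.PI / 180.0;
        double cos = Math.Cos(-angle), sin = Math.Sin(-angle);
        double cx = source.Width / 2.0, cy = source.Height / 2.0;

        var dest = new Bitmap(source.Width, source.Height);
        for (int y = 0; y < dest.Height; y++)
        {
            for (int x = 0; x < dest.Width; x++)
            {
                // Map the destination coordinate back into the source image.
                double sx = cos * (x - cx) - sin * (y - cy) + cx;
                double sy = sin * (x - cx) + cos * (y - cy) + cy;

                int ix = (int)Math.Round(sx), iy = (int)Math.Round(sy);
                dest.SetPixel(x, y,
                    ix >= 0 && iy >= 0 && ix < source.Width && iy < source.Height
                        ? source.GetPixel(ix, iy)
                        : Color.White); // assumed background fill
            }
        }
        return dest;
    }
}
```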
Another approach that works well for binary images is to identify lines of consecutive pixels that are the same color in the source image's scan lines, then rotate these lines and draw them into the destination image. If the rotation takes a coordinate "far" from an integer value, a second line can be drawn to patch gaps that appear due to aliasing.
|
I happened across the Levenshtein algorithm for finding the fewest changes that can be made to go from one file to another, but I can't figure out how to find the PATH that produced that number without taking up O(n*m) memory. Is there some way to find the path WHILE you perform the algorithm? The way I am doing it now is to store the entire n*m array, find the solution, then traverse the array from end to beginning to recover a shortest path (sketched below). Is there a way, using either another algorithm or the modification to Levenshtein that only takes O(n) memory, to find a least-cost change path? Thanks,
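For reference, a minimal C# sketch of the memory-hungry approach described above: build the full table, then trace back from the bottom-right corner to recover one optimal edit sequence. This is the current O(n*m) version, not an answer to the O(n) question:

```csharp
using System;
using System.Collections.Generic;

static class EditPath
{
    // Build the full (n+1) x (m+1) Levenshtein table, then walk it backwards
    // to list one optimal sequence of edits turning a into b.
    public static List<string> Diff(byte[] a, byte[] b)
    {
        int n = a.Length, m = b.Length;
        var d = new int[n + 1, m + 1];
        for (int i = 0; i <= n; i++) d[i, 0] = i;
        for (int j = 0; j <= m; j++) d[0, j] = j;

        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,      // delete a[i-1]
                                            d[i, j - 1] + 1),     // insert b[j-1]
                                   d[i - 1, j - 1] + cost);       // match / replace
            }

        // Trace back from (n, m) to (0, 0), emitting operations in reverse order.
        var ops = new List<string>();
        int x = n, y = m;
        while (x > 0 || y > 0)
        {
            if (x > 0 && y > 0 &&
                d[x, y] == d[x - 1, y - 1] + (a[x - 1] == b[y - 1] ? 0 : 1))
            {
                if (a[x - 1] != b[y - 1]) ops.Add($"replace a[{x - 1}] with b[{y - 1}]");
                x--; y--;
            }
            else if (x > 0 && d[x, y] == d[x - 1, y] + 1)
            {
                ops.Add($"delete a[{x - 1}]");
                x--;
            }
            else
            {
                ops.Add($"insert b[{y - 1}] at {x}");
                y--;
            }
        }
        ops.Reverse();
        return ops;
    }
}
```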
Sounds like somebody's got a case of the Mondays
-Jeff
|
First you need to define "binary". Is this a binary object module with variable-length instruction sizes and relative jump-offset addressing, or is it a table of fixed-length numeric data?
Dave Augustine
|
I have two "files", which I know are made up of some number of bytes. I want to create a difference file that defines the fewest (or near-fewest) changes required to convert one "file" into the other. By "file", I mean some finite stream of bytes. I don't know the format, and don't care. All I know is that one "file" was a previous version of the other, so the difference between them should be negligible. However, it could be a bitmap, a Word document, a .wmv, etc. I did NOT mean two executable files, although after reading your post I see how you came to that conclusion. Any ideas? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
I don't understand the bit about "converting one file into the other" - you already have both of the files. It sounds like you want to keep only one file, plus a difference file that can recreate the other one, just to conserve space. With source code, the base is usually kept and differences are collected and then applied in a batch to create the new base. With binary files, however, you probably want to keep the current version and only keep the differences needed to get back the original. This is like the U**X SED for binary, and in reverse.
If this is to be a generic solution for any such archiving, the answer would probably be different than if it were for one specific file format; there might even be several different solutions depending on the file.
Tell me more about the files.
Dave Augustine.
|
Jeff,
I hate to double post, but why don't we move this discussion to another thread of yours, since the two are so similar. I just responded to your thread about finding maximum substrings; it was at the end of August and the beginning of September last year.
Dave
|
Yes, they are similar problems in that I am looking for similarities between two inputs, but they require quite different solutions. For example, when finding the longest matching substring, we can stop searching once we have cut the potential matches down to the longest current match. The applications are also completely different: I was using the longest-matching-substring algorithm to find matches in strings that were usually smaller than 256 characters, and I wrote an algorithm that I thought would be efficient at ONLY finding such a solution. Extending my current solution to find all matching substrings would truly bloat the algorithm, making it much less efficient than the "dumb" method of doing this.
So the other solution is not usable in this case, because I need to find how to modify a string (of bytes) from A to B, and in my case A and B could be very large (say 4 GB) files. That means I cannot use an algorithm that requires n^2 memory (such as the Levenshtein algorithm I currently have coded, which takes about 10 minutes for two 15 KB files due to paging the n^2 table in and out of RAM/cache). However, I CAN afford n^2 time, because I plan to write this as a background process that archives old versions of files. These are the reasons why this is a separate post. Thanks in advance for any additional help,
Sounds like somebody's got a case of the Mondays
-Jeff
|
Jeff,
Give me a couple of hours to work up the documentation on how to use the MaxStrings function to do a binary file difference. My code is already modified to support this; the only change needed is for the program to read two files instead of a single file of test-case string pairs. I may be able to get that done as well. A good test case will be to compare the .EXE files from the first version with the second version.
Dave.
|
I have just completed my version to compare two files. I had created a program that just did the MaxSubstring function, and used its source and EXE as the files to compare with the new file-compare version. It compares 161527 bytes against 57897 bytes (the two source code files, 161 KB and 58 KB) in 1.079 seconds, producing a difference file of 13 KB. The .EXE comparison was 41015 bytes against 36916 bytes (41 KB and 36 KB) in 0.906 seconds, producing a 15 KB difference file.
The algorithm could still stand some tweaking, because my debug version (slower, but it outputs lots of data) shows some sections that consist of many repetitions of "match 1 byte, 1 byte different, match 1 byte...". Just addressing each of the differences (file locations and sizes) requires much more file space than the changes themselves. The algorithm is based on my version of the MaxSubstring function.
To verify the result, I created another function that took the base file, applied the difference file to reconstruct the second file, and then did a file compare to confirm the reconstruction was accurate. Are you still interested in the algorithm?
Dave Augustine
|
Yeah, I am definitely still interested! I have just been working on this as a side project in my spare time to try and learn a new algorithm/approach. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
The only reason I spent this much time on it was that I thought it was a worthwhile project that I could use myself. Have you got an e-mail I can send the info to?
Dave
|
Gullett.16@osu.edu... Thanks again,
Sounds like somebody's got a case of the Mondays
-Jeff
|
Jeff,
I do have your e-mail address now. During a complete verification of the process I uncovered several errors under special conditions, which I have now corrected. I then tried to test the algorithm when a file is split across several buffers (a file too big for a single buffer) by defining my input buffers smaller than the file sizes I was testing, and ran into several hideous problems with inter-buffer communication. I finally have that working as well, so I now have a complete, working, tested solution.
Now, what to send you. I really don't feel comfortable sending the actual code as either an .exe or an .asm. The implementation is quick and dirty - no error checking, assuming everything is correct, etc. - and I do not want it to ever end up in a commercial environment. Even the explanation is too big to include in a forum post (the algorithm and its explanation probably deserve an article, but that would involve a lot of work). Soooo, what do you really need - just the algorithm description so you can implement it in C#?
Dave Augustine.
|
Just the description should be fine. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
Jeff,
Description is on the way via E-mail.
Dave Augustine
|
Friends, I want to segment a selected region from video frames. Say in frame 1 I manually select an actor with a number of mouse clicks (to identify the region boundary). I am looking for a good algorithm/framework which can segment that region from the background, or which can set every pixel except the actor to white, in all of the remaining frames
(C++ / C#, or anything else).
Any smart ideas, please?
Thanks
|
I’m posting this question after a conversation with one of my friends.
There are two algorithms that accomplish the same goal: searching through an XML file. These algorithms were developed by two different academic research groups. A friend of mine works in one of them, and she wants to compare the performance of the algorithms. The problem is that they are implemented in different languages: C++ and Java. Source code for both is available. The question is: is there an experimental method for comparing the performance of these algorithms without having both of them implemented in the same language? What assumptions are safe to make?
Thanks,
- Nick
|
None. Unless the asymptotic running times of the two algorithms are different (say, one is O(n) and the other is O(n^2)), you cannot say with certainty which is faster in any language or on any processor. You will need to either (1) show that the two algorithms have different asymptotic run times, or (2) run benchmarks comparing the two algorithms using different languages, processors, and input data. For the second option, at the end of the day all you can claim is "On ___ processor with ___ configuration, and ___ input data, using the ___ compiler (or the ___ runtime engine), the relative runtimes of the two algorithms were ___ and ___". Using this information you can make certain generalizations with a high level of confidence, but unfortunately asymptotically equivalent algorithms cannot be compared with any certainty across all systems or all compiler optimizations. Good luck,
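If it helps, a minimal C# sketch of one side of such a benchmark; the warm-up count, repetition count, and the runXmlSearch delegate are all illustrative assumptions, and the Java side would need an equivalent harness:

```csharp
using System;
using System.Diagnostics;

static class Benchmark
{
    // Time one algorithm over several repetitions after a warm-up pass,
    // reporting the median to reduce the effect of outliers.
    public static double MedianMilliseconds(Action runXmlSearch, int warmups = 3, int reps = 20)
    {
        for (int i = 0; i < warmups; i++) runXmlSearch();   // let JIT and caches settle

        var times = new double[reps];
        for (int i = 0; i < reps; i++)
        {
            var sw = Stopwatch.StartNew();
            runXmlSearch();
            sw.Stop();
            times[i] = sw.Elapsed.TotalMilliseconds;
        }
        Array.Sort(times);
        return times[reps / 2];
    }
}
```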
Sounds like somebody's got a case of the Mondays
-Jeff
|
Does anyone know of a good algorithm for calculating minimum whole number ratios?
For instance:
800x600 would be 4x3
300x110 would be 30x11
250x150 would be 5x3
thanks in advance
|
Simply divide by the greatest common divisor (gcd).
For example, 800x600: gcd(800, 600) = 200, so you get 4x3, etc.
You can calculate the gcd using Euclid's algorithm - see the Wikipedia article[^].
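A minimal C# sketch of that, assuming positive dimensions (the type and method names are just placeholders):

```csharp
using System;

static class Ratio
{
    // Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    static int Gcd(int a, int b)
    {
        while (b != 0)
        {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    // Reduce a width x height pair to its minimum whole-number ratio.
    public static (int, int) Reduce(int width, int height)
    {
        int g = Gcd(width, height);
        return (width / g, height / g);
    }
}

// Example: Ratio.Reduce(800, 600) gives (4, 3); Ratio.Reduce(300, 110) gives (30, 11).
```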
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
Hello everyone,
I am planning to design a timetable system for my 3rd year project. I would like to know what the best algorithm to use for this is. I will be doing my system in C#, and will also have a website using ASP.NET! I want an algorithm that is simple and easy to understand - nothing complex; I'm no Einstein.
|