|
We know in advance how many Ys will enter the queue; once the collection of Xs starts working, there is no stopping until we are past the MaxTime limit. Each X should not work longer than MaxTime. How many Xs should I supply to work on a given number of Ys, when each X takes another Y from the queue as soon as it finishes the previous one?
|
|
|
|
|
N = (Y * T1) / (X * T2) where T2 "is less than MaxTime".
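To make the arithmetic concrete - a minimal sketch, assuming every Y takes roughly the same processing time per item (the function name and parameters are mine, not from the question):

```python
import math

def workers_needed(num_items: int, time_per_item: float, max_time: float) -> int:
    """Fewest Xs so that no X works longer than max_time, when each X
    pulls the next Y from the queue as soon as it finishes one."""
    per_worker = int(max_time // time_per_item)  # Ys one X can finish in time
    if per_worker == 0:
        raise ValueError("a single Y already exceeds max_time")
    return math.ceil(num_items / per_worker)

# e.g. 100 Ys at 2s each with a 30s budget: each X handles 15 Ys -> 7 Xs
```

If processing times vary, this gives a lower bound rather than an exact answer.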
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|
|
I have thought out a sorting algorithm.
Its comparison count is about 12% fewer than gcc's quicksort (averaged over 1000 runs of sorting 1000 items).
If I were a scholar, this would be a case for a dissertation.
But I'm not. (I am a smartphone application developer.)
(I once wrote an article about a previous version of the algorithm and submitted it to Cornell University, and it was refused. I was told to go to a forum like this one.)
The characteristic is:
.It would be classified as a variation of merge sort.
.It is in-place, but different from "in-place merge sort". (Normally, merge sort is not in-place.)
I think it may be the fastest sort ever, but I currently have no way to publish it.
Does anyone know a good way to announce it to people who can evaluate it?
(The program listing is a little long to include in this message.
I have not yet written an explanation document about it; I am still testing and modifying it.)
|
|
|
|
|
Member 14560162 wrote: But I have now no way to publish.
Is there someone who knows the good way to announce it to the people who can evaluate it?
Have you considered publishing it as an article here?
Submit a new Article[^]
They are designed for code and algorithms, plus the explanation of them.
Here's a couple of mine, to give you the idea:
Using struct and class - what's that all about?[^]
List<T> - Is it really as efficient as you probably think?[^]
You'll reach up to 14,000,000 software developers, and get feedback (positive and / or negative) fairly quickly once it's moderated.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Thank you for the answer.
I had not come across the article page before you pointed it out.
I will try to write an article.
But the submission will take time, because I have written nothing yet and must write it between other work.
|
|
|
|
|
No rush - take your time!
Articles take me a fair amount of time to write as well - often several times longer than the code they are based on.
|
|
|
|
|
You can mark the article as a work in progress and simply save it for further editing, without having to publish it in an unfinished state (which would be a bad thing to do).
|
|
|
|
|
I would like to correct a couple of points in my question.
"Quicksort is the fastest sort" was my misunderstanding; merge sort is faster if comparison count is the topic.
The sort I have devised performs the same operations as the bottom-up implementation of merge sort, without using a second array.
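For reference, a standard bottom-up merge sort looks roughly like this sketch (Python for brevity); note the auxiliary buffer, which is exactly what the variant described above avoids:

```python
def bottomup_mergesort(a):
    """Return a sorted copy of a using bottom-up (iterative) merge sort.
    Uses an auxiliary buffer, so it is not in-place."""
    n = len(a)
    src, dst = list(a), [None] * n
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):       # merge adjacent runs
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src                     # swap buffers
        width *= 2
    return src
```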
|
|
|
|
|
Hello.
I would like to ask if anybody is able to write Borůvka's and/or Prim's algorithm in Pascal? Thank you.
|
|
|
|
|
An experienced Pascal programmer could probably do it. However, this site does not provide code to order, so you will need to recruit someone yourself.
|
|
|
|
|
So, this question is bugging me because my calculus is rusty... so here I am seeking help. I thought the answer was 0, but I dismissed it, because 0 implies instant termination and therefore a crash. And indeed, if I pick 0, then 2^n = 1 and 100 * n^2 = 0, which makes 100 * n^2 faster - but that shouldn't be correct.
When I punch in n = 32: 2^32 = 4,294,967,296, while 100 * 32^2 = 102,400 (and 100 * 20^2 = 40,000).
So I brute-forced it to find the first occurrence of 100 * n^2 being faster than 2^n: it was at n = 15.
How could I have found this efficiently?
|
|
|
|
|
You could always cheat and use Wolfram|Alpha:
https://www.wolframalpha.com/input/?i=100+n%5E2+%3C+2%5En[^]
The solution seems to involve the Lambert W-Function[^], so it's not simple.
Looking at the generated number line, the boundary is at 14.3247, so the smallest integer solution is n = 15, which matches your brute-force result.
Given the small input range (1-31), brute force is probably still the best option:
int n = Enumerable.Range(1, 31).First(n => 100 * n * n < Math.Pow(2, n));
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Hi friend, lol... But Batman doesn't cheat, so I won't either!
Anyway, I found a solution a couple of days ago, and I want to share it with you.
The answer was 15; 14 is still too small.
There are two ways I know how to do this, after the Stack Exchange senpais taught me.
The first is binary search.
Take a range, say 10-20, since the boundary was easy to isolate there.
Compute (20 + 10) / 2 = n and plug n in; if n satisfies the condition, keep the lower half, otherwise the upper half, and repeat until the range closes on the smallest n that works.
The other way is fancier and more effective.
Take the base-2 log of both sides of the equation: log2(2^n) and log2(100 * n^2),
which gives n * log2(2) = log2(100) + 2 * log2(n),
which simplifies to n = log2(100) + 2 * log2(n),
which is roughly n = 6.64 + 2 * log2(n) (about 7 + 2 * log2(n)).
Now n = 6.64 + 2 * log2(n) is your f(n).
Iterate n <- f(n) from any reasonable starting point; it converges to the crossover, about 14.32, and the next integer up is 15.
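That iteration can be sketched in a few lines (Python; the helper name is mine):

```python
import math

def iterate(f, n=10.0, steps=60):
    """Repeat n <- f(n); converges here because |f'(n)| < 1 near the root."""
    for _ in range(steps):
        n = f(n)
    return n

f = lambda n: math.log2(100) + 2 * math.log2(n)  # from n = log2(100 * n^2)
root = iterate(f)           # converges to about 14.3247
answer = math.ceil(root)    # 15: smallest integer n with 2^n > 100 * n^2
```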
|
|
|
|
|
Some of the techniques require building chains that jump from one candidate to another, sometimes to different cells, sometimes to different candidates within the same cell.
The chain lengths can be anywhere between 4 and 30 nodes, and the number of possible chains is very large.
My code is technically working and can find the chains I am looking for, but it can take 20 seconds to find a single useful chain sometimes.
Right now I am building the chains using nodes in a tree in a linked-list format, and using recursion to continue the chain. When a terminal node is found it is added to a list to be tested. Then it breaks the chain into smaller ones and tries them too.
The problem is that I am probably building chains more than once (because a chain is the same both forwards and backwards), and I am not sure how I can eliminate the redundancy.
Or maybe the problem is that I am using recursion and I should be using iteration, but I am not sure the best method to iterate with.
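To illustrate the forwards/backwards duplication - a sketch with plain tuples rather than the actual NodeAIC type - each finished chain can be given a canonical key (the lexicographically smaller of the chain and its reverse), with the keys kept in a set:

```python
def canonical(chain):
    """Same key for a chain and its reverse."""
    chain = tuple(chain)
    return min(chain, tuple(reversed(chain)))

def add_chain(chain, seen):
    """Record chain; return False if it (or its mirror) was seen before."""
    key = canonical(chain)
    if key in seen:
        return False
    seen.add(key)
    return True
```

With `seen = set()`, adding a chain succeeds the first time, and adding the same chain reversed is rejected as a duplicate.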
This is the recursive call within the function:
if (currNode.children.Count > 0)
{
    for (int n = 0; n < currChildCount; n++)
    {
        temp = currNode.children[n];
        BuildAIC(temp, temp.note, terminalNodes, !findStrongLink);
    }
}
When the recursive loop finishes, I will have all the chains that can be built starting from one point.
I iterate through each point and each note in each point and then check the terminal nodes like this:
foreach (Point p in UnsolvedCells)
{
    foreach (int note in cells[p.Y, p.X].notes.ToList())
    {
        ancestor = new NodeAIC(p, note);
        BuildAIC(ancestor, note, terminalNodes, true);
        for (int n = 0; n < terminalNodes.Count; n++)
        {
            // ... check the terminal node / chain here ...
        }
    }
}
Can anyone give me some ideas?
|
|
|
|
|
It seems to me that a "Sudoku solver" "learns" with each pass, tagging cells as possible or not possible for each number in the range.
I don't see any such learning in your version.
|
|
|
|
|
|
Hi guys,
I'm studying for a computing exam, came across the following question on a past paper, and need help with it.
When would algorithm A be slower than algorithm B? Demonstrate your answer with the help of an example.
Algorithm A
SET sum TO 0
FOR i=1 to size
FOR j=1 to 10000
sum=sum+1
Algorithm B
SET sum TO 0
FOR i=1 to size
FOR j=1 to size
sum=sum + 1
I came up with this answer, but I'm not sure if it is correct:
Algorithm A will be slower than algorithm B when the algorithm's running time is proportional to the cube (or a higher power) of the input size, for example if the Big O notation becomes O(N^3), O(N^4), O(N^5), etc. For O(N^3), nest the for loops inside two more for loops:
Set Sum TO 0
For i=1 to size
For k=1 to size
For l=1 to size
For j=1 to 10000
sum=sum+1
|
|
|
|
|
Member 14525747 wrote: The algorithm A will be slower than algorithm B when the performance of the algorithm is directly proportional to the cubed or more of the size of the input data set
While I can't look into the mind of the designer of the question, I'm pretty sure the intent was to analyze the algorithms as they are, so "when" means "for what values of size", and not "what if I change the algorithm".
|
|
|
|
|
A runs "longer" than B when size < 10,000.
A runs the same as B when size == 10,000.
A runs faster than B when size > 10,000.
Or, A is slower than B while size < 10,000.
(Working through the cases by hand like this used to be called "playing computer", or desk checking.)
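Playing computer is easy to automate here - a sketch that just counts the loop iterations each algorithm performs:

```python
def iters_a(size):
    return size * 10000  # algorithm A: outer loop size, inner loop 10000

def iters_b(size):
    return size * size   # algorithm B: both loops run size times

# A does more work exactly while size < 10,000
assert iters_a(9999) > iters_b(9999)
assert iters_a(10000) == iters_b(10000)
assert iters_a(10001) < iters_b(10001)
```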
|
|
|
|
|
Function f(a: integer, b: integer): integer
var r, z: integer
Begin
    r <-- 0
    z <-- 1
    while (a != 0)
        r <-- r + (a mod 10) * z
        z <-- z * b
        a <-- a div 10
    endWhile
    return r
End
|
|
|
|
|
The best way to find out is to turn this pseudo-code into a real function.
Tip: if I'm not wrong, it returns a when b = 10; in general, it reinterprets the decimal digits of a as digits in base b.
Good luck!
|
|
|
|
|
Quote: what does this algorithm do?
The first thing to do is to make it a program, run it on lots of sample data, and analyze the results.
Another way is to simulate it on paper, noting the values of the variables as it processes the data.
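For example, a direct Python translation of the pseudo-code is short enough to run against sample data:

```python
def f(a: int, b: int) -> int:
    """Peel off the decimal digits of a (low to high) and reassemble
    them with powers of b, i.e. read a's decimal digits as base-b digits."""
    r, z = 0, 1
    while a != 0:
        r += (a % 10) * z   # next decimal digit, weighted by b^position
        z *= b
        a //= 10
    return r

# f(a, 10) == a; f(101, 2) == 5 (binary 101 is five)
```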
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
|
|
|
|
|
I want to ask one general question here. We have seen 32-bit and 64-bit operating systems.
Does any operating system with a larger bit size exist? If yes, can you please share a link to download it? I want the fastest operating system.
|
|
|
|
|
Oh dear ... this question is proof that "a little knowledge is a dangerous thing".
No, there are no OSes "larger than 64-bit". And if there were, they probably wouldn't be faster (in fact, they might be slower), and they wouldn't run on your hardware, because it doesn't support 128-bit operations. They would also need crazy amounts of RAM, as all pointers would be 128 bits.
You want a fast operating system? Go backwards to DOS - it's fast as heck compared to any modern OS, simply because it is small and doesn't support all the "bells and whistles" that Windows or Linux do: no GUI, for example.
Instead of thinking "fastest possible OS", think parallel processing and distribute your task across multiple processors: get it right and it's both scalable and dramatically faster than changing your OS...
|
|
|
|
|
Ten years ago there were rumours circulating that Microsoft was working on a 128-bit version of Windows. They were aiming for Windows 8, or definitely Windows 9.
Maybe that's why Windows skipped straight to v10?
Microsoft Working on 128-bit Windows[^]
|
|
|
|