|
Kirk Wood wrote: I don't see a huge difference between a person who spends, say, $20 at a theater for two hours of enjoyment and one who spends $20 playing Keno for a couple hours of enjoyment.
Exactly. Playing Keno is about as interesting as watching a PC run while betting that an alpha particle will take out a bit in the RAM. The odds are similar, too. But my favorite lady loves the game, and luckily prefers to play $.05 at a time, so I can spend hours of "quality time" with her for the price of a movie ticket and popcorn. Once in a while, she hits a big one and it really brings a smile to her face! Last week she hit 6 out of 6 on my birthday, and that was in a bonus round when jackpots are doubled. A win of $300 on a bet of 5 nickels is nothing to sneeze at, and creates a lot of joy for a lady with too little of it in her life.
I knew about craps, but I don't play it as it is awfully fast and intimidating to me; I just don't understand the rules, I guess. But I didn't know that straight video poker has a slight edge for the player. I usually play Double Double Bonus poker because of the payout for a 4 of a kind with a 'kicker', even though I know the lower hands pay out less than normal. What would be an optimal selection strategy for normal video poker? For example, in DDB poker, pairs are preferred over a single face card, as a pair of face cards pays even money, but 3 of a kind pays 3 to 1. In Deuces Wild, I save 2s and pairs, because Jacks or Better pays nothing. Is there a strategy that works best for the straight poker game?
Will Rogers never met me.
|
|
|
|
|
|
To get the probability of getting 3 numbers correct, just take the number of selected values divided by the total number of balls. This gives 3 divided by 80, not 20.
The probability of getting 6 numbers correct is therefore 6/80.
To determine the probability of your lucky numbers appearing in any number of consecutive draws, take the following example.
What's the probability of 3 of your numbers appearing in four draws?
It's simply obtained by multiplying the individual probabilities together, so the answer should be (3/80) cubed, or raised to the power of three, i.e. (3/80)^3.
If you have the zeal and will, I can write you a more elaborate C# program showing the workings of probability in action, free of charge, but it will take a while.
|
|
|
|
|
Thanks for the kind offer, but no thank you. Your algorithm is seriously flawed: it yields a higher probability of getting 6 numbers right than 3. But I appreciate the offer!
Will Rogers never met me.
|
|
|
|
|
Hi,
I'm looking for Open Source Audio DSP Algorithms or Filters (e.g. Delay, Reverb, Chorus, etc.).
LGPL/MIT/BSD are preferred; GPL is very limiting for commercial use due to its viral nature.
The language is not critical - C/C++/Java/C# are all good.
If you know of such a good framework please let me know.
Thanks,
Yuval
"The true sign of intelligence is not knowledge but imagination." - Albert Einstein
|
|
|
|
|
Yuval Naveh wrote: Open Source Audio DSP Algorithms or Filters? (e.g. Delay, Reverb, Chorus, etc.)
Hi
See this Link[^]
|
|
|
|
|
Hi,
Thanks for the link
Yuval
"The true sign of intelligence is not knowledge but imagination." - Albert Einstein
|
|
|
|
|
I am wondering if there is a general formula to find the number of integer solutions to an equation in two variables. For example:
Given that x and y are both positive integers, find the number of distinct (x, y) pairs that yield a given value z in the following equation:
3x^2 + y^2 = z
Essentially, I am attempting to find an f(z) that yields the following values (the pairs x,y are included after the function value):
f( 1) = 0
f( 2) = 0
f( 3) = 0
f( 4) = 1 {(1, 1)}
f( 5) = 0
f( 6) = 0
f( 7) = 1 {(1, 2)}
f( 8) = 0
f( 9) = 0
f(10) = 0
f(11) = 0
f(12) = 1 {(1, 3)}
f(13) = 1 {(2, 1)}
...
f(28) = 3 {(1, 5), (2, 4), (3, 1)}
...
f(91) = 2 {(3, 8), (5, 4)}
... I don't care about finding the pairs that yield the solution, only the number of solutions (actually, I only care whether the number of solutions is odd or even, if that helps). If there is such a function, what is the methodology of derivation? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Let's rewrite your equation as y^2 = z - 3x^2. Then y = sqrt(z - 3x^2).
We can easily see that there are no solutions for z < 3. Now we can write the following code (it's C code, but it's easy to port to another language):
#include <math.h>

int f(int z)
{
    int result = 0;
    int x = 0;
    double m = 0;                     /* m holds 3*x*x */
    double y;
    if (z < 3) return 0;
    while (m <= z)
    {
        y = sqrt(z - m);              /* candidate y for this x */
        if (y == floor(y)) result++;  /* y is an integer: found a solution */
        x++;
        m = 3.0 * x * x;
    }
    return result;
}
Note that this function has a complexity of O(sqrt(z)), as it checks all possible integer values of x (the loop runs while 3x^2 <= z).
Also note that this implementation accepts solutions where x and/or y are zero (e.g. f(4) returns 2, and the solutions found are {0, 2} and {1, 1}). If you want solutions with x and y strictly positive, you should initialize the loop with x = 1 and m = 3, and inside the loop ignore solutions with y == 0.
|
|
|
|
|
Hi Jeff,
In my experience, formulas giving the number of solutions of some problem are rare in number theory (Diophantine equations). As an example, there are some formulas that estimate the number of primes in a given range; however, all they give is an approximation, not an exact value.
So I'm afraid you need some loops, as Sauro suggested. You may organize them differently though: by calculating z from x and y and storing all the results, you could build a large table much faster than you could analyze a large number of z values one by one; this is similar to the concept of the Sieve of Eratosthenes.
|
|
|
|
|
The code that I have implemented actually does exactly what you suggest. I have an outer loop for y and an inner loop for x, and I just flip a boolean value in an array each time I find a solution (since I only need to know the number of solutions mod 2). However, the algorithm I am attempting to implement claims a better O() runtime than another algorithm I implemented to do the same thing, and I am just not seeing that claim play out in my implementations of the two. In fact, I am seeing just the opposite, and determining the number of solutions to this equation is really the only difference between them. I guess I'll just have to look a bit harder at my loops to see if I can do anything more efficiently. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
If you want to do a mapping approach such as Eratosthenes over a huge range, applying a banding technique pays off big time. I investigated prime generation more than a year ago (I still should finish the article!) and improved performance more than tenfold by making sure the output mapped well into the level-two cache. IIRC I get 20 million primes per second on a typical machine.
|
|
|
|
|
If you will check several values of Z per session and memory usage is not a problem, you might store the work of the algorithm as you check a value of Z. Err... it's hard for me to explain this in English. Let me give you an example in C#:
public class Solutions
{
Dictionary<int, bool> matches = new Dictionary<int, bool>();
int maxZ = 0;
public bool? EvenSolutions(int z)
{
if (z > maxZ)
{
int maxY = (int)Math.Sqrt(z - 3);        // largest possible y (when x = 1)
int maxX = (int)Math.Sqrt((z - 1) / 3d); // largest possible x (when y = 1)
int n = 0;
for (int x = 1; x <= maxX; x++)
{
for (int y = 1; y <= maxY && (n = 3 * x * x + y * y) <= z; y++)
{
{
if (n > maxZ)
{
if (!matches.ContainsKey(n))
matches.Add(n, false);
else
matches[n] = !matches[n];
}
}
}
maxZ = z;
}
return (matches.ContainsKey(z)) ? (bool?)matches[z] : null;
}
}
The method returns true if there is an even number of solutions, false if the number of solutions is odd, and null if there are no solutions. At the same time, it stores the possible values of Z which have solutions and are less than the given Z. So, if you later check a smaller value of Z, you will already have it in the matches dictionary and will not have to repeat the process... and if it is not in the dictionary, then there were no solutions for that value of Z.
|
|
|
|
|
Hi Jeff,
With VC2010 (or gcc 4.5) C++0x implementation:
#include <utility>
#include <array>
#include <algorithm>
template <unsigned int z>
size_t NumSolutions()
{
typedef std::pair<unsigned int, unsigned int> Solution;
std::array<Solution, z * z + 2 * z> All;
std::generate(All.begin(), All.end(),
[]()->Solution{
static unsigned int x = 0, y = 0;
return y == z ? Solution(++x, y = 0) : Solution(x , ++y);});
return std::count_if(All.begin(), All.end(),
[](const Solution& sol){ // require x, y > 0 as in the problem statement
return sol.first > 0 && sol.second > 0 && z == 3 * sol.first * sol.first + sol.second * sol.second;});
}
With previous C++03 compilers:
#include <utility>
#include <vector>
#include <algorithm>
typedef std::pair<unsigned int, unsigned int> Solution;
template <unsigned int z>
bool IsSolution(const Solution& sol)
{
// require x, y > 0 as in the problem statement
return sol.first > 0 && sol.second > 0 && z == 3 * sol.first * sol.first + sol.second * sol.second;
}
template <unsigned int z>
Solution Next()
{
static unsigned int x = 0, y = 0;
return y == z ? Solution(++x, y = 0) : Solution(x , ++y);
}
template <unsigned int z>
size_t NumSolutions()
{
std::vector<Solution> All(z *z + 2 * z);
std::generate(All.begin(), All.end(), Next<z>);
return std::count_if(All.begin(), All.end(), IsSolution<z>);
}
You may test both with:
#include <iostream>
int main()
{
using std::cout;
using std::endl;
cout << "NumSolutions<4>(): " << NumSolutions<4>() << endl;
cout << "NumSolutions<7>(): " << NumSolutions<7>() << endl;
cout << "NumSolutions<28>(): " << NumSolutions<28>() << endl;
return 0;
}
cheers,
AR
Edited for correct size of All vector.
Edited for use of adequate std::array in C++0x version.
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)
modified on Friday, November 26, 2010 4:45 PM
|
|
|
|
|
Below is an algorithm which I wrote to find the lower bound of a given key in a sorted table; in other words, it finds the largest key which is <= the given one. I wonder whether there is a more compact or more efficient algorithm.
#include <stddef.h>
static size_t blsize;
static const void *skey;
static int (*cmpf)(const void *e1, const void *e2);
static void *lbnd(const void *base, size_t size)
{
const char *p,*p1;
int c;
size_t n;
if(size==0) return NULL;
if(size==1)
{
c = (*cmpf)(skey,base);
if(c<0)
return NULL;
else
return (void *)base;
}
n = size >> 1;
p = (char *)base + n*blsize;
c = (*cmpf)(skey,p);
if(c==0) return (void *)p;
if(c<0)
{
return lbnd(base,n);
}
else
{
p1 = lbnd((char*)p + blsize,size - n - 1);
if(p1==NULL)
return (void *)p;
else
return (void *)p1;
}
}
void *lbound(const void *key, const void *base0, size_t nmemb, size_t size, int (*compar)(const void *e1, const void *e2))
{
blsize = size;
cmpf = compar;
skey = key;
return lbnd(base0,nmemb);
}
The static function lbnd does the search using static variables which hold the constant values (element size, key, comparison function); since lbnd is called recursively, this minimizes the overhead of parameter passing.
|
|
|
|
|
“I think the bubble sort would be the wrong way to go.”
|
|
|
|
|
Maybe yes, maybe no. The question is not about sorting, though, but about searching a sorted table.
|
|
|
|
|
Oh good grief... did none of you CP geeks get the reference[^]?
|
|
|
|
|
I don't think I understand your requirements correctly, because to me it looks like any key <= x will be fine, and since the table is sorted, you could just take the first one and return an error if it is > x.
|
|
|
|
|
Hi,
liquid_ wrote: I wonder whether there is a more compact algorithm or maybe more efficient
As you use size_t, you are probably using a C++ compiler here.
Then, if Key is any type with operator<(const Key& other) defined:
#include <algorithm>
std::min_element( base, base + nElts );
returns a const Key* pointing to the smallest key in the array.
cheers,
AR
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)
|
|
|
|
|
Thanks for the constructive reply.
Firstly, I used size_t only as shorthand for unsigned int. The rest of the code is written in pure C, and it should stay just so - more universal.
Secondly, analyzing the std::min_element function, it is clear that it has linear complexity. What I am aiming for is logarithmic complexity.
However, comparing my implementation of
lbound(const void* key, void *base, unsigned int nmemb, unsigned int size, int (*compare)(const void *e1, const void *e2)) with the bsearch implementation, it seemed to me that my lbound is not as compact and efficient as the standard bsearch implementation (two additional checks - size == 0 and size == 1 - and one more comparison after the size == 1 check).
Maybe it's inherent to the lower-bound problem: bsearch just returns NULL if it does not find a key, while lbound returns NULL only if the key is lower than the first element of the table.
Regards
|
|
|
|
|
Hi,
liquid_ wrote: code is written in pure C and it should be just such - a more universal one. This is really planet-dependent
liquid_ wrote: What I tried to aim is logarithmic complexity.
If your data are unsorted, you will have to check each and every one against the current minimum, so you will never get below linear complexity. If your data are sorted, the problem is trivial.
More generally, you should consider that more man-hours have been spent on each part of the Standard C++ Library than you could provide in your entire professional life.
Good luck,
AR
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)
|
|
|
|
|
The assumption is that the data are sorted. I'd just like to know whether the solution I found is that trivial one.
Concerning the philosophy of using a ready-made STL versus creating your own solutions: it depends on particular needs. Many programs are written in C rather than C++ for various reasons (e.g. in the UNIX or Linux worlds). I use pure C, or C++ with the STL or other available libraries (e.g. Boost, MFC, .NET and so on) as well. Most of the software I write is made just for fun, to search for interesting solutions, or to solve problems I meet in my job. It is not my core business.
Anyway, thanks for your points.
Regards
|
|
|
|
|
...an extraordinary fact - probably known to half of you, but nevertheless I thought I'd share it. It's not an algorithm, but never mind!
The square of any prime number >= 5 can be expressed as 1 + a multiple of 24
or
If p is a prime >= 5, then p^2 = 1 + 24n, for some integer n
Don't know about you, but I think that's quite amazing. Why is it true at all, and given that it is, where does 24 come from?! (Pity it's not 42, really!)
[edit] When I ask "where does 24 come from?" I am not asking for a mathematical proof - I can look that up myself. It's more just a metaphysical musing...
modified on Thursday, November 4, 2010 6:37 AM
|
|
|
|
|
What is that "n"? Is it 1 for p = 5?
|
|
|
|
|