|
Because the house edge in Keno is among the highest in the casino.
Better to run over to the roulette wheel and put the same money on red or black. You'll still lose it, but at a much lower rate.
|
|
|
|
|
Ah, you're just not in the spirit of gambling!
Over time, all casino games (except Craps) favor the house. But people do get lucky and nail a large win before losing a like amount. That hope is what keeps the industry alive. I have personally hit 6 of 6 on Keno twice this year, which pays off about 360:1, within the first few minutes of play. From years of playing, and watching players, I've learned that only one strategy works - play for a short time, and if nothing hits, walk away. If you do get a big win early on, walk away with it. That last is the hardest, by the way. It's really tempting sometimes to think that the machine is on a winning streak and more will follow. It does happen, but not often enough to make it worth trying.
Will Rogers never met me.
|
|
|
|
|
I could come up with some formulas for red and black probabilities on the roulette wheel...
|
|
|
|
|
Roger Wright wrote: Over time, all casino games (except Craps) favor the house.
In general, any game in which the house pays the players favors the house.
Exceptions arise either because the odds were calculated incorrectly or because the game itself is flawed.
Such exceptions very seldom occur, and they usually involve odd circumstances. Casual play will not reveal them; the house's own accounting would quickly expose the problem, since the take would not match the expected income.
|
|
|
|
|
Can't validate it myself, but have a 5 just for such a good-looking answer.
return 5;
|
|
|
|
|
This problem has been well worked over; if you Google "Keno probability" you can even find source code to work out the expected return over a range of bets.
From what I have seen, though, all expected returns on $1 are less than $1, so on average you will come out ahead if you leave the dollar in your pocket. Where I live these are simple government revenue-raising schemes that prey on the vulnerable (somewhat ironically, the same folks the government supports).
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I agree; the probability of winning at gambling is always less than 50% (usually a lot less), otherwise there would be nowhere to gamble.
return 5;
|
|
|
|
|
musefan wrote: always less than 50%
Not quite. The way roulette works, the house wins when 0 is drawn, as it isn't considered part of any combination (red, black, low, high, odd, even, etc.), so for every 37 units staked you can expect 36 to be returned and 1 to be lost. That's about a 2.7% profit on all the stakes for every turn of the wheel, not bad for the casino.
|
|
|
|
|
Actually, sometimes a casino relies on people's lack of knowledge. There are two games in which people who play correctly can come out ever so slightly ahead of the house: straight video poker and craps. Even so, people routinely go for the big bucks against their odds and lose plenty of money to the house. In theory you can also get ahead of the odds in Blackjack if the house allows it, but the large gaming centers guard against card counting quite well.
In Keno, the best strategy is to not think you can beat the system. So don't play unless the enjoyment of playing is worth the cost of losing every time you buy in. I don't see a huge difference between a person who spends say $20 at a theater for two hours of enjoyment and one who spends $20 playing Keno for a couple hours of enjoyment.
|
|
|
|
|
Kirk Wood wrote: I don't see a huge difference between a person who spends say $20 at a theater for two hours of enjoyment and one who spends $20 playing Keno for a couple hours of enjoyment.
Exactly. Playing Keno is about as interesting as watching a PC run while betting that an alpha particle will take out a bit in the RAM. The odds are similar, too. But my favorite lady loves the game, and luckily prefers to play $.05 at a time, so I can spend hours of "quality time" with her for the price of a movie ticket and popcorn. Once in a while, she hits a big one and it really brings a smile to her face! Last week she hit 6 out of 6 on my birthday, and that was in a bonus round when jackpots are doubled. A win of $300 on a bet of 5 nickels is nothing to sneeze at, and creates a lot of joy for a lady with too little of it in her life.
I knew about craps, but I don't play it as it is awfully fast and intimidating to me; I just don't understand the rules, I guess. But I didn't know that straight video poker has a slight edge for the player. I usually play Double Double Bonus poker, because of the payout for a 4 of a kind with a 'kicker', even though I know the lower hands pay out less than normal. What would be an optimal selection strategy for normal video poker? For example, in DDB poker, pairs are preferred over a single face card, as a pair of face cards pays even money, but 3 of a kind pays 3 to 1. In Deuces Wild, I save 2s and pairs, because Jacks or better pays nothing. Is there a strategy that works best for the straight poker game?
Will Rogers never met me.
|
|
|
|
|
|
To get the probability of getting 3 numbers correct, just take the total number of selected values divided by the total number of balls. This gives 3 divided by 80, not 20.
The probability of getting 6 numbers correct is therefore 6/80.
To determine the probability of your lucky numbers appearing in any number of consecutive draws, consider the following example.
What's the probability of 3 of your numbers appearing in four draws?
It's simply got by multiplying the individual probabilities by each other, so the answer is (3/80) cubed, i.e. (3/80)^3.
If you have the zeal and will, I can make you a more elaborate C# program showing the workings of probability in action, free of charge, but it will take a while.
|
|
|
|
|
Thanks for the kind offer, but no thank you. Your algorithm is seriously flawed, yielding a higher probability of getting 6 right than three. But I appreciate the offer!
Will Rogers never met me.
|
|
|
|
|
Hi,
I'm looking for open source audio DSP algorithms or filters (e.g. Delay, Reverb, Chorus, etc.).
LGPL/MIT/BSD are preferred; GPL is very limiting for commercial use due to its viral nature.
The language is not critical - C/C++/Java/C# are all good.
If you know of such a good framework please let me know.
Thanks,
Yuval
"The true sign of intelligence is not knowledge but imagination." - Albert Einstein
|
|
|
|
|
Yuval Naveh wrote: Open Source Audio DSP Algorithms or Filters? (.e.g Delay, Reverb, Chorus, etc.)
Hi
See this Link[^]
|
|
|
|
|
Hi,
Thanks for the link
Yuval
"The true sign of intelligence is not knowledge but imagination." - Albert Einstein
|
|
|
|
|
I am wondering if there is a general formula to find the number of integer solutions to an equation in two variables? For example:
Given that x, y are both positive integers, find the number of distinct (x, y) pairs that yield a given value z in the following equation:
3x^2 + y^2 = z
Essentially, I am attempting to find an f(z) that yields the following values (the pairs x,y are included after the function value):
f( 1) = 0
f( 2) = 0
f( 3) = 0
f( 4) = 1 {(1, 1)}
f( 5) = 0
f( 6) = 0
f( 7) = 1 {(1, 2)}
f( 8) = 0
f( 9) = 0
f(10) = 0
f(11) = 0
f(12) = 1 {(1, 3)}
f(13) = 1 {(2, 1)}
...
f(28) = 3 {(1, 5), (2, 4), (3, 1)}
...
f(91) = 2 {(3, 8), (5, 4)}
... I don't care about finding the pairs that yield the solution, only the number of solutions (actually, I only care if the number of solutions is odd or even, if that helps). If there is such a function, what is the methodology of derivation? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Let's rewrite your equation as y^2 = z - 3x^2. Then y = sqrt(z - 3x^2).
We can easily see that there are no solutions for z < 3. Now we can write the following code (it's C code, but it's easy to port to another language):
#include <math.h>

int f(int z)
{
    int result = 0;
    int x = 0;
    double m = 0;
    double y;
    if (z < 3) return 0;
    while (m <= z)
    {
        y = sqrt(z - m);
        if (y == floor(y)) result++;
        x++;
        m = 3*x*x;
    }
    return result;
}
Note that this function has a complexity of O(sqrt(z)), as it checks all possible integer values of x (there are roughly sqrt(z/3) of them).
Also note that this implementation accepts solutions where x and/or y are zero (e.g. f(4) returns 2 and the solutions found are {0, 2} and {1, 1}). If you want solutions with x and y strictly positive, you should initialize the loop with x = 1 and m = 3, and inside the loop ignore solutions with y == 0.
|
|
|
|
|
Hi Jeff,
In my experience, formulas giving the number of solutions of some problem are rare in number theory (Diophantine equations). As an example, there are formulas that estimate the number of primes in a given range, but all they give is an approximation, not an exact value.
So I'm afraid you need some loops, as Sauro suggested. You may organize them differently though: by calculating z from x and y and storing all the results, you could build a large table much faster than you could analyze a large number of z values one by one; this is similar to the concept of the Sieve of Eratosthenes.
|
|
|
|
|
The code that I have implemented actually does exactly what you suggest. I have an outer loop for y and an inner loop for x, and just flip a boolean value in an array each time I find a solution (since I only need to know the number of solutions mod 2). However, the algorithm I am attempting to implement claims to have a better O() runtime than another algorithm I implemented to do the same thing, and I am just not seeing this claim play out in my implementation of the two algorithms. In fact, I am seeing just the opposite, and determining the number of solutions to this equation is really the only difference between the two. I guess I'll just have to look a bit harder at my loops to see if I can do anything more efficiently. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
If you want to do a mapping approach such as Eratosthenes over a huge range, applying a banding technique pays off big time. I investigated prime generation more than a year ago (I still should finish the article!), and upped the performance by more than a factor of 10 by making sure the output mapped well into the level-two cache. IIRC I am getting 20 million primes per second on a typical machine.
|
|
|
|
|
If you will check several values of Z per session, and memory usage is not a problem, you might store the work the algorithm does as you check each value of Z. Errr... confusing; it's hard for me to explain this in English. Let me give you an example in C#:
public class Solutions
{
    Dictionary<int, bool> matches = new Dictionary<int, bool>();
    int maxZ = 0;

    public bool? EvenSolutions(int z)
    {
        if (z > maxZ)
        {
            int maxY = (int)Math.Sqrt(z - 3);
            int maxX = (int)Math.Sqrt((z - 1) / 3d);
            int n = 4;
            for (int x = 1; x <= maxX; x++)
            {
                for (int y = 1; y <= maxY && (n = 3 * x * x + y * y) <= z; y++)
                {
                    if (n > maxZ)
                    {
                        if (!matches.ContainsKey(n))
                            matches.Add(n, false);
                        else
                            matches[n] = !matches[n];
                    }
                }
            }
            maxZ = z;
        }
        return matches.ContainsKey(z) ? (bool?)matches[z] : null;
    }
}
The method returns true if there is an even number of solutions, false if the number of solutions is odd, and null if there are none. At the same time, it stores all values of Z which have solutions and are less than the given Z. So, if you later check a smaller value of Z, it will already be in the matches dictionary and you will not have to repeat the process; if it is not in the dictionary, there were no solutions for that value of Z.
|
|
|
|
|
Hi Jeff,
With VC2010 (or gcc 4.5) C++0x implementation:
#include <utility>
#include <array>
#include <algorithm>
template <unsigned int z>
size_t NumSolutions()
{
    typedef std::pair<unsigned int, unsigned int> Solution;
    std::array<Solution, z * z + 2 * z> All;
    std::generate(All.begin(), All.end(),
        []() -> Solution {
            static unsigned int x = 0, y = 0;
            return y == z ? Solution(++x, y = 0) : Solution(x, ++y);
        });
    return std::count_if(All.begin(), All.end(),
        [](const Solution& sol) { return z == 3 * sol.first * sol.first + sol.second * sol.second; });
}
With previous C++03 compilers:
#include <utility>
#include <vector>
#include <algorithm>
typedef std::pair<unsigned int, unsigned int> Solution;

template <unsigned int z>
bool IsSolution(const Solution& sol)
{
    return z == 3 * sol.first * sol.first + sol.second * sol.second;
}

template <unsigned int z>
Solution Next()
{
    static unsigned int x = 0, y = 0;
    return y == z ? Solution(++x, y = 0) : Solution(x, ++y);
}

template <unsigned int z>
size_t NumSolutions()
{
    std::vector<Solution> All(z * z + 2 * z);
    std::generate(All.begin(), All.end(), Next<z>);
    return std::count_if(All.begin(), All.end(), IsSolution<z>);
}
You may test both with:
#include <iostream>
int main()
{
using std::cout;
using std::endl;
cout << "NumSolutions<4>(): " << NumSolutions<4>() << endl;
cout << "NumSolutions<7>(): " << NumSolutions<7>() << endl;
cout << "NumSolutions<28>(): " << NumSolutions<28>() << endl;
return 0;
}
cheers,
AR
Edited for correct size of All vector.
Edited for use of adequate std::array in C++0x version.
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)
modified on Friday, November 26, 2010 4:45 PM
|
|
|
|
|
Below is an algorithm I wrote to find the lower bound of a given key in a sorted table; in other words, it finds the largest key which is <= the given one. I wonder whether there is a more compact, or perhaps more efficient, algorithm.
#include <stddef.h>
static size_t blsize;
static const void *skey;
static int (*cmpf)(const void *e1, const void *e2);
static void *lbnd(const void *base, size_t size)
{
    const char *p, *p1;
    int c;
    size_t n;

    if (size == 0) return NULL;
    if (size == 1)
    {
        c = (*cmpf)(skey, base);
        if (c < 0)
            return NULL;
        else
            return (void *)base;
    }
    n = size >> 1;
    p = (char *)base + n * blsize;
    c = (*cmpf)(skey, p);
    if (c == 0) return (void *)p;
    if (c < 0)
    {
        return lbnd(base, n);
    }
    else
    {
        p1 = lbnd((char *)p + blsize, size - n - 1);
        if (p1 == NULL)
            return (void *)p;
        else
            return (void *)p1;
    }
}

void *lbound(const void *key, const void *base0, size_t nmemb, size_t size, int (*compar)(const void *e1, const void *e2))
{
    blsize = size;
    cmpf = compar;
    skey = key;
    return lbnd(base0, nmemb);
}
The static function lbnd does the search using static variables to hold the values that remain constant; since lbnd is called recursively, this minimizes the overhead of parameter passing.
|
|
|
|
|
“I think the bubble sort would be the wrong way to go.”
|
|
|
|
|