|
Company A owns 25% of company B and 90% of itself = 0.25 * 1M + 0.90 * 2M = 2.05 M
Company B owns 10% of company A and 75% of itself = 0.10 * 2M + 0.75 * 1M = 0.95 M
Which sums to 3M in total.
Everything else is just financial math.
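The arithmetic can be checked in a couple of lines (a sketch; the 25%/90% and 10%/75% holdings and the 2M/1M standalone values are the ones from the post):

```csharp
using System;

class OwnershipCheck
{
    // Effective value of each company's holdings (in millions),
    // given standalone values: A = 2M, B = 1M (figures from the post).
    public static double EffectiveA(double valueA, double valueB)
        => 0.25 * valueB + 0.90 * valueA;   // A holds 25% of B and 90% of itself

    public static double EffectiveB(double valueA, double valueB)
        => 0.10 * valueA + 0.75 * valueB;   // B holds 10% of A and 75% of itself
}
```

EffectiveA(2, 1) gives about 2.05 and EffectiveB(2, 1) about 0.95, i.e. 3M together, as above.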
Greets
Matthias
|
|
|
|
|
Such situations never actually exist, though they may seem to.
The way you have suggested it, it will never reach an equilibrium. In reality, A can only own a maximum of 100% of itself. If 10% is owned by B, A automatically loses that 10%. If you give me 10 USD, I am richer by 10 USD and you are poorer by 10 USD. It cannot happen that you keep your amount and I gain it too.
Once you accept this fact, it brings us to a different kind of situation. What you are suggesting here is that every time the value of A changes, the value of B changes as well, and vice versa.
In any such example, there will always be one finite value that makes the transition from object 1 to object 2. It is up to you to identify it.
Do you have any more examples like this? It's probably just an incorrect assumption.
|
|
|
|
|
Dear all,
I am a newbie here, this is my first post. I have looked over a few algorithms and have searched this board for my question but have not been able to narrow down to a clear answer.
I am looking for any pointers/code/advice to help decide how to solve the following problem.
I have to write a program that suggests the starting time of a job. There is 1 CPU, there are n jobs already scheduled on it, and we know their run times. I can assume a run time (maybe the median of the known run times, or something similar) for the new job. The constraint is that at no time should there be more than m jobs running in parallel.
I think this problem is relatively simple. As a hammer-and-tongs solution, I have implemented a quicksort-style binary-cleaving routine. It starts with a random time of day, say 13:00, and checks whether one more process can be started at that time: it takes the "assumed" run time of the new process and tests whether the number of jobs would exceed m at any point during that run time. If so, success! Otherwise, I cleave the two sides of the chosen time, 00:00-12:59 and 13:01-23:59 (of course I can add the assumed run time to 13:01 to make a better choice of window/range), take the midway time position of each, check there, and so on.
Can there be a more formal solution to this problem, maybe based on something like the Knapsack algorithm? Any advice is welcome.
Also, what would be a good measure for the "assumed" run time? I'm thinking the median. I do realize that this question is a bit vague.
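For what it's worth, the feasibility test described above ("can the new job start at time t without ever exceeding m?") can also be answered directly with a sweep over the start/end events of the scheduled jobs, instead of binary cleaving. A sketch, with all names hypothetical and the schedule given as (start, end) pairs:

```csharp
using System;
using System.Collections.Generic;

class Scheduler
{
    // Earliest t >= 0 such that during [t, t + duration) no more than
    // maxParallel jobs (including the new one) run at once.
    public static double EarliestStart((double Start, double End)[] jobs,
                                       int maxParallel, double duration)
    {
        // Sweep events: +1 at each job start, -1 at each job end.
        var events = new SortedDictionary<double, int>();
        foreach (var j in jobs)
        {
            events.TryGetValue(j.Start, out int s); events[j.Start] = s + 1;
            events.TryGetValue(j.End, out int e); events[j.End] = e - 1;
        }

        double candidate = 0;   // start of the current feasible window
        int running = 0;
        foreach (var ev in events)
        {
            // 'running' existing jobs were active on the segment ending at ev.Key.
            if (running >= maxParallel)
                candidate = ev.Key;              // too busy; window must start after this segment
            else if (ev.Key - candidate >= duration)
                return candidate;                // [candidate, candidate + duration) fits
            running += ev.Value;
        }
        return candidate;                        // nothing runs after the last event
    }
}
```

This is O(n log n) for n scheduled jobs and finds the earliest slot in one pass, rather than probing individual times.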
Thanks in advance,
-J
|
|
|
|
|
Hi
I think I would go for an observer program that checks whether one more job can be started. If so, I would start one.
If you want, you can add an importance-weighting scheme based on the assumed run times.
I don't know if this answers your question, but this is the direction I would tend to go.
Cheers
You have the thought that modern physics just relies on assumptions that somehow depend on the smile of a cat which isn't there. (Albert Einstein)
|
|
|
|
|
thebiggestbangtheory wrote: Also, what could be a good measure of the "assumed" run time, I'm thinking median.
I'd go for a progress indication displayed as "% work done", instead of a calculated guess based on previous run times.
You could still calculate the time required to finish the job, based on the percentage feedback that you get.
I are Troll
|
|
|
|
|
Why not just use a simple, "greedy" approach like an operating system, and start jobs as they come in, until m jobs are running. Then when one finishes, start another one?
A few nuances:
1. If a job can start other jobs before it can complete, you risk a deadlock situation where two jobs are each waiting for the other to complete before they can start other jobs.
If you know in advance how many other jobs a job can start, a conservative approach is the Banker's Algorithm: If a job will start k other jobs, don't start it if more than m - (k + 1) jobs are currently running. This will guarantee you have the resources needed to finish the job.
2. Starting short jobs before longer jobs will increase throughput. (But if you give shorter jobs too much priority, some long jobs may never run. This was observed in some early operating systems.)
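The admission rule in point 1 can be written as a one-line check (a sketch; the names are made up):

```csharp
using System;

class Admission
{
    // Banker's-style check from point 1: a job that may spawn childJobs
    // more jobs is only admitted when the job itself plus all its
    // children still fit within the maxParallel budget.
    public static bool CanAdmit(int running, int maxParallel, int childJobs)
    {
        return running + childJobs + 1 <= maxParallel;
    }
}
```

Equivalently: don't start the job if more than maxParallel - (childJobs + 1) jobs are currently running.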
|
|
|
|
|
For shopping cards, I want to generate a unique serial number for every card (a potentially unlimited number of cards). What is the solution? Please help me.
|
|
|
|
|
Do they need to be ordered, or must they be "random"?
Is there a size limit?
One possible solution is to use a GUID.
When a GUID is not possible, I generally use a long value obtained from DateTime.Now.Ticks (or incremented randomly when two calls fall within the same tick).
The class code follows, but you only need to call:
TicksOrIncrement.GetNext() to use it.
using System;

namespace Pfz.Threading
{
    public static class TicksOrIncrement
    {
        private static readonly object fLock = new object();
        private static long fLastValue = DateTime.Now.Ticks;
        private static readonly Random fRandom = new Random();

        public static long GetNext()
        {
            long ticks = DateTime.Now.Ticks;
            lock(fLock)
            {
                if (ticks <= fLastValue)
                    ticks = fLastValue + 1 + fRandom.Next(1000);
                fLastValue = ticks;
            }
            return ticks;
        }
    }
}
|
|
|
|
|
Yes, the length should be 14, and I want random numbers because with a sequence there is a risk of pattern matching.
|
|
|
|
|
Since you want randomness and no duplicates, do the following. Generate a list of all consecutive numbers from m to n (where m is a 14-digit number; you may want to skip any number that has duplicate adjacent digits, e.g. 4672950748223546). Set Count to n-m-1. Using any "good" RNG (which on its own could still produce duplicates), generate a random number x between 0 and Count, and use x as an index to select a number from the m-to-n list. Save the selected number in the serial-number list, then remove it from the m-to-n list and decrement Count. Repeat n-m-1 times. Caution: this can be very slow if you have a massive list (the serial-number list grows linearly, and you can write each generated number to a file, but shrinking the massive list is slow). If you are interested, I can dig out a fast C implementation with multiple arrays that I used for randomly selecting words from a dictionary (where the list was a list of file offsets to the dictionary words).
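The select-and-remove scheme can be sketched without materializing the full list: a dictionary records only the slots that have been disturbed, so memory grows with the numbers drawn rather than with n - m (a sketch; all names are made up):

```csharp
using System;
using System.Collections.Generic;

class UniqueSerialPicker
{
    private readonly long m_low;
    private long m_remaining;
    private readonly Random m_rng = new Random();
    // Simulates the shrinking list: virtual index -> value moved into that slot.
    private readonly Dictionary<long, long> m_moved = new Dictionary<long, long>();

    // Draws without replacement from the range [low, low + count).
    public UniqueSerialPicker(long low, long count)
    {
        m_low = low;
        m_remaining = count;
    }

    public long Next()
    {
        if (m_remaining == 0) throw new InvalidOperationException("range exhausted");
        long i = (long)(m_rng.NextDouble() * m_remaining);      // random slot in the virtual list
        long picked = m_moved.TryGetValue(i, out long v) ? v : m_low + i;
        // Move the last element of the virtual list into slot i, then shrink the list.
        long lastIndex = --m_remaining;
        long last = m_moved.TryGetValue(lastIndex, out long w) ? w : m_low + lastIndex;
        m_moved[i] = last;
        m_moved.Remove(lastIndex);
        return picked;
    }
}
```

This is a lazy Fisher-Yates shuffle: same no-duplicates guarantee as the literal list, but O(1) memory per draw.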
Dave.
|
|
|
|
|
|
GUIDs are rather long (128 bits, i.e. 32 hex digits). If one had a good 40-bit random number generator, one could generate roughly 2^17 (about 130,000) items without the risk of even a single collision becoming significant. The output from a 40-bit generator could be converted to a 12-digit number, and possibly merged with a header and/or checksum to yield a 16-digit card number (a common size).
BTW, does anyone know whether putting a GUID through an MD5 or similar hash function and grabbing the first few bytes of the result would be a good way to generate 'random' numbers of e.g. 32-96 bits that are not supposed to duplicate more often than purely random numbers would?
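The 40-bit estimate above follows from the birthday approximation; a sketch for checking such figures (assumes the values are independent and uniformly random):

```csharp
using System;

class BirthdayBound
{
    // Approximate probability that n random b-bit values contain at least
    // one collision: p ~= 1 - exp(-n * (n - 1) / 2^(b + 1)).
    public static double CollisionProbability(double n, int bits)
    {
        return 1.0 - Math.Exp(-n * (n - 1) / Math.Pow(2.0, bits + 1));
    }
}
```

For n = 2^17 items and 40 bits this gives roughly 0.8%, consistent with the "not very significant" claim.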
|
|
|
|
|
I have this binary heap, and I need to get the index of a particular node. I can do that now by iterating through the array one element at a time until I find it, but that's too slow. Are there any faster ways to search? What's the fastest?
Here is my code:
class NodeHeap
{
    private Node[] m_nodeHeap = new Node[1000000];
    private uint m_lastNodeIndex;

    public long Count
    {
        // number of stored nodes (slot 0 is unused); the array length is irrelevant here
        get { return m_lastNodeIndex; }
    }

    public Node NodeAt(uint index)
    {
        return m_nodeHeap[index];
    }

    public void RemoveAt(uint index)
    {
        if (index > m_lastNodeIndex)
        {
            throw new ArgumentOutOfRangeException("index");
        }
        SwapNode(ref m_nodeHeap[index], ref m_nodeHeap[m_lastNodeIndex]);
        m_nodeHeap[m_lastNodeIndex--] = null;
        if (index <= m_lastNodeIndex)
        {
            SiftDown(index); // the moved node may be cheaper than its parent...
            SiftUp(index);   // ...or costlier than its children; at most one will act
        }
    }

    public uint IndexOf(Node n)
    {
        for (uint i = 1; i <= m_lastNodeIndex; i++)
        {
            if (m_nodeHeap[i] == n) return i;
        }
        return 0; // 0 is never a valid index in this 1-based heap
    }

    public void Add(Node n)
    {
        m_nodeHeap[++m_lastNodeIndex] = n;
        SiftDown(m_lastNodeIndex);
    }

    // pushes the node at index down towards the leaves while it costs more than a child
    public void SiftUp(uint index)
    {
        if (index >= m_lastNodeIndex) return;
        uint child = index * 2;
        if (child > m_lastNodeIndex) return;
        // pick the cheaper of the two children (there may be only one)
        if (child != m_lastNodeIndex && m_nodeHeap[child + 1].Cost < m_nodeHeap[child].Cost)
        {
            child++;
        }
        if (m_nodeHeap[child].Cost < m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[child], ref m_nodeHeap[index]);
            SiftUp(child);
        }
    }

    // pulls the node at index up towards the root while it costs less than its parent
    public void SiftDown(uint index)
    {
        if (index == 1) return;
        uint parent = index / 2;
        if (m_nodeHeap[parent].Cost > m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[parent], ref m_nodeHeap[index]);
            SiftDown(parent);
        }
    }

    public Node RemoveFirst()
    {
        Node n = m_nodeHeap[1];
        RemoveAt(1);
        return n;
    }

    private void SwapNode(ref Node a, ref Node b)
    {
        Node temp = a;
        a = b;
        b = temp;
    }
}
|
|
|
|
|
Btw, your swap works "oddly"; I would expect a swap by index and avoid the ref hacks, but whatever.
To search faster, use your heap property. Meaning that, if this is a min-heap (it looks like one), check whether the value you're searching for is smaller than the current subtree's root; if so, return, and otherwise recurse into the children. Alternatively, do the test before recursing, to save method calls.
(Btw, are you going to do A* with it? A heap is a good choice for A*, in my experience.)
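The pruning idea can be sketched against a 1-based array min-heap like the one posted (the Node class and its Cost ordering are assumptions mirroring the original code):

```csharp
using System;

class Node { public double Cost; }

class HeapSearch
{
    // Pruned search in a 1-based array min-heap: skip any subtree whose
    // root already costs more than the node we are looking for, since
    // everything below it costs at least as much. Returns 0 if not found.
    public static uint Find(Node[] heap, uint last, uint root, Node target)
    {
        if (root > last) return 0;                     // ran off the end of the heap
        if (heap[root] == target) return root;         // reference match
        if (heap[root].Cost > target.Cost) return 0;   // prune: whole subtree too costly
        uint hit = Find(heap, last, 2 * root, target); // left child
        return hit != 0 ? hit : Find(heap, last, 2 * root + 1, target);
    }
}
```

Worst case it is still O(n) (equal costs force both branches), but on typical A* cost distributions large subtrees get pruned.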
|
|
|
|
|
harold aptroot wrote: Btw, your swap work "oddly", I would expect to swap by index and avoid ref hacks, but w/e.
Sounds like a nice modification.
harold aptroot wrote: see if your value is smaller than the current root, and if so, return and go the other way instead (recursively)
I'm not quite sure what you mean. Are you saying bounce back and forth between the first and last, then second to first, then second to last, etc.?
harold aptroot wrote: (btw, are you going to do A* with it? Heap is a good choice for A* IME)
Yes, and right now it takes too long to find the path in a complex map, such as a spiral with the end node inside of it.
|
|
|
|
|
CaptainSeeSharp wrote: I'm not quite sure what you mean. Are you saying bounds back and forth between the first and last, then second to first, then second to last, ect...?
Not sure what you mean here. I mean: start at the root; if the value you're searching for is smaller than the root, then clearly the item is not in the heap. This applies recursively to every sub-heap as well.
OTOH, I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
|
|
|
|
|
harold aptroot wrote: I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
I would really like to see how to make mine not need it. When mine gets the nodes around the current one, it checks whether they are on the open list (that is where IndexOf is called), and it then adds whichever nodes are not on the open list.
|
|
|
|
|
But surely your heap is sorted by F cost? And if you find a node a second time, the G (and therefore the F) will probably be different, since you would have reached it from another node (the one you reached it from the first time should be in your closed list).
I used a sparse "grid of grids" of bits to test membership of the open and closed lists (the closed "list" was only that grid, with no actual list). A normal grid would be easier, but I had huge maps where paths would "usually" stay in roughly one area, which meant that usually only up to 4 entries of the "supergrid" were used. The whole thing was just a BitArray[,], btw. The "supertile" size should be a power of 2 and its area divisible by 32 to be efficient this way, e.g. 16x16 or 32x32. Testing membership of the open or closed list is then only O(1), at the expense of not even all that much memory (just 1 bit per tile plus some overhead, and usually only for a subset of tiles).
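A stripped-down sketch of the bit-grid membership test (the sparse "supergrid" refinement is omitted; names are made up):

```csharp
using System.Collections;

// Open-list membership as one bit per map tile: O(1) test, 1 bit of
// memory per cell. A closed list would be a second grid like this one.
class OpenSet
{
    private readonly BitArray m_bits;
    private readonly int m_width;

    public OpenSet(int width, int height)
    {
        m_width = width;
        m_bits = new BitArray(width * height);
    }

    public bool Contains(int x, int y) => m_bits[y * m_width + x];
    public void Add(int x, int y)      => m_bits[y * m_width + x] = true;
    public void Remove(int x, int y)   => m_bits[y * m_width + x] = false;
}
```

In the A* loop this replaces the linear IndexOf scan: test Contains(x, y) before pushing a neighbour onto the heap.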
|
|
|
|
|
I think I get what you are saying: use a grid of bits or bools (the same size as the map), where a value of 1 or true at a particular X,Y coordinate of the BitArray[,] means the node is on the open list? I don't use a closed list, by the way, and it works just as well.
|
|
|
|
|
Hello! I have a detection problem: I need to locate multiple objects (people). I was planning to use the Viola-Jones algorithm in Lienhart's extended version (included in OpenCV). Can I do a massive detection of objects (a crowd, each individual person) with these algorithms? (I know they are mainly used for face detection.) Or does another solution exist within OpenCV?
Can anyone help me? I don't need source code, just help choosing an algorithm to do this, please!
|
|
|
|
|
There's an implementation by Irshad Ali & Mathew of people detection using Viola-Jones and tracking using a particle filter. They created a Haar cascade of heads to detect humans.
They used the particle filter implementation by Rob Hess.
All the links can be found on my webpage:
https://sites.google.com/site/learningcomputervision/particle_filter[^]
I have corrected some bugs in Irshad's code and added the modified files as attachments on the webpage.
Regards,
Kaushik
|
|
|
|
|
I'm a newbie and would appreciate it if someone could start me off and guide me on how to convert the following algorithm to C# (a probability distribution in a wireless LAN).
Algorithm 1: x = Corr_GetLocation(n, S, X, RM)
Input:
n : Number of samples from each access point.
S : Measured signal-strength vectors from k access points (S = (s1, ..., sk)).
    Each si, 1 ≤ i ≤ k, is a vector containing n samples from access point i.
X : Radio-map locations.
RM : Radio map, where RM[a][x](q) represents the pdf of the average of n signal-strength samples from access point a at location x ∈ X.
Output:
The location x ∈ X that maximizes P(x|S).
1: for i = 1..k do
2:     Avg(i) ← average(si)
3: end for
4: Max ← 0
5: for l ∈ X do
6:     P ← ∏_{i=1..k} ∫ from Avg(i)-0.5 to Avg(i)+0.5 of RM[i][l](q) dq
7:     if P > Max then
8:         x ← l
9:         Max ← P
10:    end if
11: end for
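A possible C# rendering of the algorithm, assuming the radio map is supplied as a pdf callback and the integral is approximated numerically (the delegate, method names, and the midpoint rule are my assumptions, not part of the paper):

```csharp
using System;
using System.Linq;

class CorrGetLocation
{
    // Radio-map pdf: density of the averaged signal strength q coming
    // from access point ap at location loc (how you store it is up to you).
    public delegate double RadioMapPdf(int ap, int loc, double q);

    // Returns the location (0..locations-1) maximizing the product of the
    // per-access-point probabilities, integrating each pdf over
    // [avg - 0.5, avg + 0.5] as in step 6 of the pseudocode.
    public static int Locate(double[][] samples, int locations, RadioMapPdf rm)
    {
        int k = samples.Length;                                // access points
        double[] avg = samples.Select(s => s.Average()).ToArray();

        int best = 0;
        double max = 0;
        for (int loc = 0; loc < locations; loc++)
        {
            double p = 1.0;
            for (int i = 0; i < k; i++)
            {
                int ap = i;                                    // avoid capturing loop variable
                p *= Integrate(q => rm(ap, loc, q), avg[i] - 0.5, avg[i] + 0.5);
            }
            if (p > max) { max = p; best = loc; }
        }
        return best;
    }

    // Simple midpoint-rule approximation of the integral.
    private static double Integrate(Func<double, double> f, double a, double b, int steps = 100)
    {
        double h = (b - a) / steps, sum = 0;
        for (int s = 0; s < steps; s++)
            sum += f(a + (s + 0.5) * h);
        return sum * h;
    }
}
```

In a real implementation RM would likely be a discretized histogram per (access point, location) pair rather than a callback, but the control flow stays the same.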
modified on Wednesday, November 11, 2009 1:28 PM
|
|
|
|
|
This is more a programming question than an algorithm question.
What languages do you know?
Silver member by constant and unflinching longevity.
|
|
|
|
|
Just a brief question. In a little demo I've got going I can have up to 300 objects floating around the screen and checking every object against every other object is painfully slow.
Using a quad tree to split the screen up and group the objects greatly speeds up checking for collisions but unfortunately rebuilding the quad tree every frame eats up quite a lot of the performance that you gain from using it.
Does anybody have any ideas on what I could do, bearing in mind that all 300 objects will be constantly moving and are all fairly close together? Trying to only update objects that have moved would cause the entire tree to be jiggled about, perhaps costing more time than a full rebuild.
My current favourite word is: Sammidge!
-SK Genius
Game Programming articles start - here[ ^]-
|
|
|
|
|
Why use a tree that is bound to become invalid all the time?
I would see the screen as a 2D matrix (say N*N), with N around the typical screen size divided by the typical object size (and not larger than, say, 10), so each object typically spans two rows and two columns. Each screen cell could be represented by a List containing references to the objects that (partially) lie inside it, plus a timestamp.
One time step would:
- move some objects, checking them against the 2D matrix cells, possibly removing them from some lists and adding them to others; store the current time in the cells objects got added to.
- then, for each cell marked with the current time, check the listed objects for overlap.
A possible optimization: give each object a list of the cells it is currently in; this would speed up the removal step.
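A stripped-down sketch of the cell-list idea (timestamps and incremental moves are omitted; objects here are bucketed by their centre each frame, and only same-cell pairs are tested, all names made up):

```csharp
using System;
using System.Collections.Generic;

class CollisionGrid
{
    private readonly List<int>[,] m_cells;
    private readonly double m_cellSize;

    public CollisionGrid(int cols, int rows, double cellSize)
    {
        m_cellSize = cellSize;
        m_cells = new List<int>[cols, rows];
        for (int x = 0; x < cols; x++)
            for (int y = 0; y < rows; y++)
                m_cells[x, y] = new List<int>();
    }

    // Bucket an object id into the cell under its centre. An object that
    // spans several cells would be inserted into each of them.
    public void Insert(int id, double x, double y)
    {
        m_cells[(int)(x / m_cellSize), (int)(y / m_cellSize)].Add(id);
    }

    // Candidate pairs for the narrow-phase test: only objects sharing a cell.
    public IEnumerable<(int, int)> CandidatePairs()
    {
        foreach (var cell in m_cells)
            for (int i = 0; i < cell.Count; i++)
                for (int j = i + 1; j < cell.Count; j++)
                    yield return (cell[i], cell[j]);
    }
}
```

With 300 objects spread over a grid this cuts the O(n^2) all-pairs test down to pairs within each cell; the incremental version above avoids even the per-frame re-bucketing.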
Luc Pattyn
I only read code that is properly indented, and rendered in a non-proportional font; hint: use PRE tags in forum messages
|
|
|
|
|