|
Okay, so I was guessing...
But seriously, it's only one part of the equation. For the pair of faces showing 6 and 1 pips, a shift in the center of mass occurs, equal to some distance proportional to the ratio of the mass of one pip to that of the entire die, in the direction of the face with 1 pip. Call that distance (6-1)l = 5l, and its direction i. The adjacent pair of 5 and 2 contributes (5-2)l = 3l in the direction j, and the remaining pair, 4 and 3, contributes (4-3)l = l in the k direction (remember, opposite faces of a standard die sum to 7). The total distance by which the centroid shifts is then sqrt(25+9+1)*l = sqrt(35)*l = 5.916*l. Its direction is left as an exercise for the student, but it is decidedly not toward the face showing a 1. Remember, too, that we can't just rely on the measure of missing plastic in the pips, but must add back the mass of the paint used to mark each, and that might have a specific gravity much higher or lower than the base material.
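For what it's worth, the arithmetic above can be checked in a few lines (this is just the pip-count model from the post, with l as an arbitrary unit):

```python
import math

# Pip-count model of the centroid shift on a standard die, where opposite
# faces sum to 7. l is the (assumed) shift per unit pip difference.
l = 1.0

dx = (6 - 1) * l  # 6/1 pair, direction i
dy = (5 - 2) * l  # 5/2 pair, direction j
dz = (4 - 3) * l  # 4/3 pair, direction k

shift = math.sqrt(dx**2 + dy**2 + dz**2)
print(round(shift, 3))  # 5.916
```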
For practical purposes, the shift in the center of mass is negligible compared to the random variations in the surface texture of the felt on the table and the influence of random air currents from breathing, talking, air conditioning, passers-by, and the occasional fart.
The outcome is close enough to random for any engineering use, though a mathematician might argue the point.
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
Roger Wright wrote: For practical purposes, the shift in the center of mass is negligible compared to the random variations in the surface texture of the felt on the table and the influence of random air currents from breathing, talking, air conditioning, passers-by, and the occasional fart.
Hey, you're definitely cheating: my (pseudo-)random device rolls in a vacuum box!
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Well then, I guess the only way to resolve the matter is for you to build your machine, then run it through 6^(6^6) iterations and see if the Universe ends.
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
CPallini wrote: See also here [^].
I'm sorry for being somewhat dumb, but can you explain the picture? Thanks.
regards
|
|
|
|
|
Well it is a (pseudo: see David Crow's post...) random number generator device.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
LOL. It is a practical random generator. I used to write numbers on my square eraser and toss it to decide my answers on multiple-choice questions during tests I hadn't studied for.
|
|
|
|
|
I obtained better results even on tests I did study for.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
modified on Tuesday, September 16, 2008 11:42 AM
|
|
|
|
|
|
I'm using a scanner to create my tokens and a parser to turn the tokens into a meaningful AST.
After a good start on my project, I noticed that if I made my scanner create generalized tokens, my parser logic needed more work, but if I created more specific tokens, my parser logic was greatly reduced.
Does anyone have a rule of thumb for when I should put the responsibility on the scanner and when it should be placed on the parser?
-lm
|
|
|
|
|
I can only speak from a compiler-creation point of view, but scanners (lexical analyzers) are generally pretty simple... much simpler than the parser portion of the process. My "rule of thumb" is that the scanner/lexical analyzer (lexer) scans the input file, breaks it into tokens, and identifies the type of each token. Done. It knows nothing about syntax. That is where the parser takes over.
Maybe this is just a matter of semantics, but if your parser is taking up too much responsibility, rather than leaning on the scanner to provide more information, maybe you can break the parser down into more parts and divide the responsibility that way (again, speaking from a compiler point of view):
1. Scanner (Lexical Analysis) - Break your source code down into small tokens.
2. Parsing (Syntax Analysis) - Check for correct syntax and build your abstract syntax tree (AST). The parser checks strictly for syntactic correctness and stops there.
3. Tree Analysis (more formally, semantic analysis) - Analyze and add information to your syntax tree for semantic correctness (e.g., variables declared, initialized to default values, etc.).
4. Optimization / Generation - What this is depends greatly on what your specific task is.
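As an illustration of that division of labor (a toy sketch in Python, not tied to any particular compiler toolkit), here is a scanner that only tokenizes and a parser that owns all the syntax knowledge:

```python
import re

# Scanner/parser split for a tiny expression language: NUM ('+'|'*') NUM ...
# The scanner never checks syntax; the parser never touches raw characters.

TOKEN_SPEC = [("NUM", r"\d+"), ("PLUS", r"\+"), ("TIMES", r"\*"), ("WS", r"\s+")]

def scan(text):
    """Lexical analysis: break input into (kind, value) tokens, nothing more."""
    pos, tokens = 0, []
    while pos < len(text):
        for kind, pattern in TOKEN_SPEC:
            m = re.match(pattern, text[pos:])
            if m:
                if kind != "WS":  # whitespace is discarded
                    tokens.append((kind, m.group()))
                pos += m.end()
                break
        else:
            raise SyntaxError(f"bad character at position {pos}")
    return tokens

def parse(tokens):
    """Syntax analysis: build a nested-tuple AST, '*' binding tighter than '+'."""
    def term(i):
        node = ("num", int(tokens[i][1])); i += 1
        while i < len(tokens) and tokens[i][0] == "TIMES":
            node = ("*", node, ("num", int(tokens[i + 1][1]))); i += 2
        return node, i
    node, i = term(0)
    while i < len(tokens) and tokens[i][0] == "PLUS":
        rhs, i = term(i + 1)
        node = ("+", node, rhs)
    return node

ast = parse(scan("2 + 3 * 4"))
print(ast)  # ('+', ('num', 2), ('*', ('num', 3), ('num', 4)))
```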
Enjoy,
Robert C. Cartaino
|
|
|
|
|
So how small should the scanner make the tokens?
Example:
I can create greedy tokens like this:
DCode:= 'D'[0-9][0-9][0-9]
GCode:= 'G'[0-9][0-9]
MCode:= 'M'0[0-2]
or specific tokens like this:
GCodeG36 := 'G36'
GCodeG37 := 'G37'
After working on my parser some this morning, I like the more specific expressions better. It simplifies my parsing.
My parser can just check:
"is a type"
matches &= scanner.Next() is DCodeD02Token
vs
"is a type with a value of this"
matches &= scanner.Next() is DCodeToken && ((DCodeToken)scanner.Current).DCode == 2;
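To make the trade-off concrete, here's a small Python sketch (the thread's snippets look like C#; the names here, like DCodeToken, mirror them but are hypothetical) contrasting the two checks:

```python
import re

# Generic token: the scanner emits one DCodeToken carrying the numeric value,
# so the parser must check both the type and the value.
class DCodeToken:
    def __init__(self, code):
        self.code = code  # e.g. 2 for "D02"

def scan_generic(text):
    m = re.fullmatch(r"D(\d{2,3})", text)
    return DCodeToken(int(m.group(1))) if m else None

tok = scan_generic("D02")
matches = isinstance(tok, DCodeToken) and tok.code == 2  # type AND value check
print(matches)  # True

# Specific token: the scanner resolves the value up front, so the parser
# only needs a type check.
class DCodeD02Token(DCodeToken):
    pass

def scan_specific(text):
    return DCodeD02Token(2) if text == "D02" else scan_generic(text)

print(isinstance(scan_specific("D02"), DCodeD02Token))  # True
```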
I would like to see an example of tree analysis. It might help me with my current problem:
http://www.codeplex.com/irony/Thread/View.aspx?ThreadId=35310[^]
|
|
|
|
|
Umm, I'd like to start by admitting that I have a learning disorder in mathematics. I was wondering, however, if anyone knew of a way, or of an application, to convert an equation from a physics textbook or a chalkboard into the linear form used in a programming language (everything on one line). I realize this is a crazy question to ask, but I'm trying to learn a language called Dark Basic, and the physics books I keep looking in have everything in a longer form that's not compatible. Any reply would be appreciated.
|
|
|
|
|
It's not a crazy question, but it is indecipherable. Could you provide an example? Are you referring to condensing multiple equations into a single line? Many languages allow you to place multiple statements on a single line, separated by a special character; the ';' is commonly used.
As for learning disabilities, I wouldn't be too sure. I taught many students who were told they would never learn math by previous instructors. All of them learned math in my class. More often it is a teaching disability that prevents students from learning it.
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
Please elaborate: for instance, one usually uses linear approximations to the solutions of physical equations...
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Dale Keller wrote: I realize this is a crazy question to ask
Actually, it is indecipherable as Roger has said.
It might be helpful to know of any ideas you have thought up.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
|
|
|
|
I'm having trouble understanding what you need too. Physics equations are usually on one line, while a programming-language implementation often uses several assignment statements.
The rules for transforming equations are pretty straightforward, and with some practice can be mechanically applied.
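For example (a Python sketch; Dark Basic syntax would differ, and the values here are made up), a textbook kinematics equation like d = v0*t + (1/2)*a*t^2 becomes a single assignment statement:

```python
# The whole equation goes on one line; the textbook's fraction and
# superscript just become ordinary operators.
v0 = 10.0  # initial velocity, m/s (illustrative value)
a = -9.8   # acceleration, m/s^2
t = 1.5    # time, s

d = v0 * t + 0.5 * a * t**2  # d = v0*t + (1/2)*a*t^2 on one line
print(d)
```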
|
|
|
|
|
Sorry for the crossposting[^] but I'm not quite sure where to post this.
Rather than repeat and retype all of this again, could you please just refer to the referenced post. If you believe the post belongs here, please let me know and I'll move it.
TIA
|
|
|
|
|
I'd say this is the more appropriate place for your question, but considering how specialized it is, you'll probably get few takers.
I'm not entirely sure what your ultimate question is from the quick read I gave it; I'm still waking up. However, it sounds like peak detection and separation. A place to look for more software algorithms to handle this is some of the process-industry instrumentation areas. The one I know for sure that has to do peak analysis is chromatography, for chemical analysis. The only contact I had with that particular part of the software was trying to port the peak analysis from 16 to 32 bit and from C to C++. It was a godawful mess. I also seem to remember seeing a software library advertised for general peak fitting and analysis, so it has to be somewhat common in instrumentation.
If you don't have the data, you're just another a**hole with an opinion.
|
|
|
|
|
Thanks for your suggestion, Tim. I'm looking into it, and it looks like it could work, but so far all I've found are plot programs that do this, with no source code or discussion of the algorithms beyond mentioning which ones can be used.
I'll keep looking.
|
|
|
|
|
I've read your posts and I don't have a clear idea of what you are trying to do. It looks like you have samples of an analogue radar reflection and, with knowledge of the transmit pulse shape, you want to determine the number and locations of the reflected pulses. If that is the case, then you are really looking at standard radar signal processing, and there is a substantial literature on the topic.
The two main traditional performance goals would be the ability to detect weak echoes in the presence of noise and the ability to resolve echoes close together. If you need good performance in either of these areas then I would strongly advise you to read up on estimation theory (or get advice in this area) before you settle on your algorithms. Simply trying to mimic what you would do manually will be far from optimal.
The sort of way you would pose the question mathematically is: "If I assume that this waveform consists of N pulses plus random noise, what are the locations and amplitudes of the pulses?"
Another way you can approach the problem is as a system identification problem. Here you say that you transmit waveform s(t), which passes through a filter with impulse response h(t), and you receive waveform r(t), which includes noise as well. Given r(t) and s(t), determine h(t). This can be solved by a least-squares approach, but you need care to reduce the sensitivity to noise. In the absence of noise, most heuristic algorithms work reasonably well: you simply write r(t) as a weighted sum of delayed versions of s(t) and have some way of optimising the weights and delays.
You really need to settle on your algorithms and know that they will work before you start coding. Typically people would simulate their algorithms in something like Matlab, where they can create samples of signals with various combinations of pulses and noise and test their algorithm's performance.
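A minimal numpy sketch of that least-squares framing, with made-up discrete signals (building a convolution matrix is one standard way to set it up):

```python
import numpy as np

# Given transmitted s and received r = (s * h) + noise, estimate the
# impulse response h by least squares. All signals are illustrative.
rng = np.random.default_rng(0)

s = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # transmit pulse
h_true = np.zeros(8)
h_true[1], h_true[4] = 1.0, 0.6  # two "echoes" at different delays/amplitudes

# Convolution matrix S such that r = S @ h
n = len(s)
S = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        S[i, j] = s[i - j]

r = S @ h_true + 0.01 * rng.standard_normal(n)  # received signal + small noise

h_est, *_ = np.linalg.lstsq(S, r, rcond=None)
print(np.round(h_est, 2))  # peaks near indices 1 and 4
```

With heavier noise the plain least-squares solution degrades quickly, which is exactly the sensitivity Peter warns about; regularisation or estimation-theoretic methods are then needed.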
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I'm sorry if I wasn't clear; I've written so much that each time I just tried to abbreviate it. No, I'm receiving beacon radar reply pulses from aircraft transponders. Additionally, though beacon radar transmits (interrogates) and then turns off the transmitter and turns on the receiver, I am not an integral part of the radar system, primary (search) or secondary (beacon). So they are not reflections that I am processing, as in a primary radar, but responses, or replies, to interrogations from a secondary radar system.
I am not connected to the radar system and am simply receiving beacon radar replies (considered the downlink) from aircraft. The possibility exists that the system could be adapted to listen to the uplink (the radar transmitting interrogations), and if we did that, it would simplify my processing significantly, especially for Mode S data, since the plane being interrogated would have its unique ID as part of the message and I would know which plane was responding even if the receipt envelope was slightly corrupted. However, the added cost and development time make it highly unlikely that this will be done.
Besides Mode-S data, there can be Air Traffic Control Radar Beacon System (ATCRBS) replies (the older system, prior to Mode-S, that mostly general aviation uses, though Mode-S-equipped aircraft will also respond to ATCRBS interrogations), and finally there are ADS-B replies. All of these replies can be interleaved or overlapped.
Though I am not totally familiar with all of the noise-filtering methods, I have played around with them on the data using D-Plot. I have tried least mean squares, cubic splines, and others, and found that while they do have some positive effect, it is no better than simply averaging or even decimating the samples. The other methods seem to smooth the data but don't reduce the noise level at the floor or suppress noise spikes, and they have very little effect on the noise levels of our legacy receiver. A new receiver has been designed that, from what I'm told, requires more amplification and shoots the leading edge up much faster, but has much more noise. Thus each receiver is different, and the design will change.
Keep in mind that I am only over there for a short period of time, on loan, and that ideally most of this would have been done in hardware, but there are other considerations as well, such as cost. What they have now works for Mode S data using a very simple method, but they don't process anything else, and they have problems processing overlapped pulse trains, especially at low signal levels, because of noise.
Right now I am looking into the peak-fitting suggestion above; it's going slowly, but I will post my progress later.
|
|
|
|
|
OK, I think I now understand: you are trying to decode things like the pulse-position modulation on the Mode-S reply.
JohnnyG wrote: A new receiver has been designed and from what I'm told, requires more amplification, shoots up the leading edge much faster but has much more noise
Your new receiver simply has more bandwidth; this doesn't matter, as the algorithm will reduce the noise bandwidth. More noise coming in does make it more important to get the algorithm right.
JohnnyG wrote: ideally, most of this would have been done in hardware
It doesn't matter where it is done, the algorithm is important.
Simple data smoothing and peak detection is not going to give anywhere near optimal performance in noise.
For what it is worth, my suggestion would be to run some sort of maximum-likelihood detector to determine the timing synchronisation (i.e. where the pulse edges are). In its simplest form this would use the energy in all four preamble pulses (to reduce the effect of noise), and the processing would be quite simple. Determining the timing is the key; once you have it you can process the message: in each bit period, integrate the total power in each half and compare the two, outputting a '1' if there is more energy in the first half. For improved noise performance you could look at running a decision-directed timing loop to improve the timing estimate from subsequent bits, or even do a joint timing and data estimation. I don't know what sort of performance you need or what sort of processing power you have, but determining the timing with a detector based on all four preamble pulses will vastly outperform any data-smoothing / peak-detector approach, and requires very simple processing.
If you decimate the samples to reduce the noise, you introduce timing errors, as you can't guarantee that your decimation coincides with the signal timing; and if you run averaging, you are doing much more processing than you need, and you still have to work out which average to use - i.e. determine the timing.
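A rough sketch of that half-bit energy comparison (sample counts, amplitudes, and noise level are all made up here; real Mode-S decoding would first lock timing on the preamble):

```python
import numpy as np

SAMPLES_PER_HALF = 5  # assumed samples per half-bit period

def decode_bits(samples, n_bits):
    """For each bit period, output 1 if the first half carries more energy."""
    bits = []
    for k in range(n_bits):
        start = k * 2 * SAMPLES_PER_HALF
        first = samples[start:start + SAMPLES_PER_HALF]
        second = samples[start + SAMPLES_PER_HALF:start + 2 * SAMPLES_PER_HALF]
        bits.append(1 if np.sum(first**2) > np.sum(second**2) else 0)
    return bits

# Synthesize two bits: pulse in the first half (a '1'), then in the
# second half (a '0'), plus a little noise.
rng = np.random.default_rng(1)
sig = np.zeros(20)
sig[0:5] = 1.0    # bit 0: energy in first half  -> 1
sig[15:20] = 1.0  # bit 1: energy in second half -> 0
sig += 0.05 * rng.standard_normal(20)

print(decode_bits(sig, 2))  # [1, 0]
```

Because each decision integrates all the energy in a half-bit period, a single noise spike matters far less than it would to a sample-by-sample peak detector.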
Good Luck!
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
cp9876 wrote: you could look at running a decision directed timing loop to improve timing detection from subsequent bits, or even do a joint timing and data estimation.
Wow! You said a whole lot of what I don't know. I may be in over my head. A quick search on the Internet via Google shows a whole lot of formulas for s(t), etc. I don't think I have the math background to do all of that.
And even though I am being self-deprecating, it's encouraging to me that the many programmers (and hardware engineers) I've approached about what I'm doing either have no idea what I'm talking about or, if they do, have no idea how to do it. Your knowledge of these techniques is admirable.
I sure could use a helper.
As far as decimating and averaging, I meant that I just tried them in the plot program to see their outputs. In reality, the legacy program simply skips 25 samples at a time through the data, and I do the same, except I refine the step size as I get closer to the downward slope or to unexpected changes in direction.
I don't think I have a problem detecting leading edges or even plateaus. Based on what you've said and what one friend suggested, since I do know the timing constraints for the preamble and for ATCRBS messages (framing pulses spaced 20.3 us apart), I can find a leading edge and then jump, for example, ~20.3 us ahead in the data and look for another leading edge. Then, after refining for pulse amplitude and width, I can pass the entire pulse train to another processor to process and/or look for overlapped pulses.
I need to process more than just Mode-S data, and the existing legacy software simply looks for a preamble and then processes the entire 56 us envelope as Mode S. Your suggestions are very helpful, though; I'm just not sure I'm capable.
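That edge-plus-expected-spacing check might look something like this sketch (the sample rate, threshold, and tolerance are all made-up numbers):

```python
SAMPLE_RATE_MHZ = 10  # assumed 10 samples per microsecond
FRAME_SPACING = round(20.3 * SAMPLE_RATE_MHZ)  # ATCRBS framing-pulse spacing
THRESHOLD = 100       # leading-edge amplitude threshold (8-bit samples)
TOLERANCE = 2         # allowed timing slop, in samples

def find_edges(samples, threshold=THRESHOLD):
    """Indices where the signal first crosses the threshold upward."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

def is_atcrbs_frame(samples):
    """True if some pair of leading edges sits one frame spacing apart."""
    edges = find_edges(samples)
    return any(abs((b - a) - FRAME_SPACING) <= TOLERANCE
               for a in edges for b in edges if b > a)

# Two synthetic pulses one frame spacing apart:
sig = [0] * 400
for start in (10, 10 + FRAME_SPACING):
    for i in range(start, start + 5):
        sig[i] = 200

print(is_atcrbs_frame(sig))  # True
```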
Sigh.
|
|
|
|
|
This is a shot in the dark since I'm not sure exactly what you're doing (it's probably over my head).
You want to detect pulses that have a range in amplitude of [1, 255]? Can there be pulses with various amplitudes that overlap at the same time?
Could a Fourier analysis help you tease apart the pulse waveforms? Are the pulses periodic in any way?
Would an envelope follower help you track the pulses?
Anyway, just throwing stuff out hoping it helps. Good luck.
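For what it's worth, an envelope follower of the sort mentioned can be as simple as rectify-and-smooth (the smoothing coefficient here is arbitrary):

```python
def envelope(samples, alpha=0.3):
    """Track the amplitude envelope: rectify, then exponentially smooth."""
    env, out = 0.0, []
    for x in samples:
        env = alpha * abs(x) + (1 - alpha) * env  # one-pole low-pass on |x|
        out.append(env)
    return out

# A short oscillating burst followed by silence: the envelope rises
# through the burst and decays afterwards.
sig = [0, 1, -1, 1, -1, 0, 0, 0]
env = envelope(sig)
print([round(e, 3) for e in env])
```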
|
|
|
|
|
Leslie Sanford wrote: This is a shot in the dark since I'm not sure exactly what you're doing (it's probably over my head).
You want to detect pulses that have a range in amplitude of [1, 255]? Can there be pulses with various amplitudes that overlap at the same time?
Could a Fourier analysis help you tease apart the pulse waveforms? Are the pulses periodic in any way?
Would an envelope follower help you track the pulses?
Anyway, just throwing stuff out hoping it helps. Good luck.
I was thinking Fourier, or perhaps even a discrete wavelet transform.
...that mortally intolerable truth; that all deep, earnest thinking is but the intrepid effort of the soul to keep the open independence of her sea; while the wildest winds of heaven and earth conspire to cast her on the treacherous, slavish shore.
|
|
|
|
|