|
It looks like a fairly good design to me.
If you cannot speed up the frame processing, then you might consider dropping some of them (if it is a viable option).
|
|
|
|
|
If he produces three for every one he processes, I'd imagine he's going to have to figure out a way to process them faster, or drops will have to occur.
|
|
|
|
|
Unless memory is enough to buffer all the needed data.
|
|
|
|
|
Assuming it's not a process that goes on forever. Usually with sensors, it's an ongoing process, they're always producing data, so if you're not using it all you have to do some sort of smart data reduction (i.e. drop if it makes sense, decimate if it makes sense).
|
|
|
|
|
The only thing that isn't clear is whether you need a single "reader thread", implying you'll read all the sensors in series, one after the other. This can be optimized if the read operations can be done in parallel. It's really application specific, so I couldn't tell you whether you can do that.
On the processing side, if each frame is identical and is processed similarly, the data can be processed using a thread pool scheme, where you have a set of worker threads that process data as it becomes available. That works really well when the processing required on the data is identical (i.e. the work function is the same, but you can have multiple independent threads working in parallel on independent data). Again, parallel processing here is application dependent.
|
|
|
|
|
The idea with thread pooling sounds very interesting. I'm not sure how applicable it is though.
I should have provided a little bit more specific information. Apologies.
The other sensors communicate with the PC via Bluetooth asynchronously. Each of them sends a data packet a couple of bytes long. All work at roughly the same speed. The packets arrive in random order; that's not a problem as long as all the most recent packets are received.
In terms of the Kinect, I use almost all streams except for sound and color.
The idea is that once all of the samples have arrived, including the multi-source frame from Kinect, their respective readers would fire an event. Once the WaitForMultipleObjects() function sees that all the expected events have fired, it unblocks and the data is copied into a custom frame class before being pushed onto the FIFO.
On the consumer side, things look a little more interesting. I can't afford to drop any frames from the Kinect. One of the heaviest tasks is running the Kinect Fusion algorithm, which runs best on the GPU. I am not sure whether this task can be parallelized on a standard PC; Fusion runs much more slowly on the CPU. Maybe it would be possible to run two instances of Fusion, one on the GPU and the other on the CPU, but I don't know how much sense that would make.
Obviously, one of the bottlenecks is the throughput of the given GPU.
I'm trying to develop this program in such a way that its performance scales with the PC's specifications, in particular the GPU and RAM. Poorer machines would process slowly, whereas better ones would approach real-time performance. Some of the top gaming PCs can run Fusion at the Kinect's frame rate.
From what I can see, the consumer side would work out best as a straight serial operation. Basically, it would be something like:
1. Pop frame from FIFO
2. Preprocess it (include other not time-consuming processing)
3. Pass the frame to Fusion.
4. Loop back to 1. if not complete.
I hope it's a bit clearer now what kind of an application it is and what sort of requirements it would have.
I am not an experienced professional coder; I just use common sense. The best structure for this program that I was able to come up with is the one described in the previous posts, and I don't see a better way of structuring it. I greatly appreciate your input, guys, and look forward to seeing more opinions and suggestions.
Thanks,
MW
|
|
|
|
|
Member 11703498 wrote: I can't afford to drop any frames from the Kinect
I don't know how you can say this and also say that you're producing data faster than you can process it. You HAVE to decimate if you're not keeping up. You'll be dropping data even if you don't want to once your queue is full. Best to deal with that some way or another so the results are predictable.
Member 11703498 wrote: It runs best on the GPU. I am not sure if this task can be parallelized on a standard PC. Fusion runs way slowlier on the CPU. Maybe it would be possible to run two instances of the Fusion, one on GPU and the other on CPU, but I don't know how much sense it would make.
GPUs are powerful because they can run in real time and exploit parallelism well; make sure you're taking advantage of that.
|
|
|
|
|
Albert Holguin wrote: Member 11703498 wrote: I can't afford to drop any frames from the Kinect
I don't know how you can say this and also say that you're producing data faster than you can process it. You HAVE to decimate if you're not keeping up. You'll be dropping data even if you don't want to once your queue is full. Best to deal with that some way or another so the results are predictable.
Not necessarily.
Dropping frames is not a good idea when they are used by the Kinect Fusion algorithm. The algorithm simply fails if consecutive frames supply data that differs too much from frame to frame. This typically happens when frames are dropped, or when the Color stream causes the Kinect's frame rate to fall to roughly half its usual 30 fps (a feature, or downside one should say, of this particular sensor).
The queue won't fill up. The system is required to have enough RAM available to the program, and the task should complete with only a certain number of frames stored in the queue. The nature of the application is a one-off job, not continuous heavy lifting. If the task fails for whatever reason (say, the number of acquired frames was insufficient), it can simply be repeated.
I am not that proficient at parallelizing work on GPUs. From what I've seen, the Fusion algorithm already utilizes the GPU's resources to the maximum. A good GPU would actually make the queue and the high RAM requirement redundant. For the time being, though, the best approach I can see is to stick with the queue and a large amount of RAM.
|
|
|
|
|
Hi community,
I'm using this example from CodeProject:
Creating your first Windows application[^]
to learn more about SDI programming, and later about MDI.
Now I'm trying to add a CListCtrl to show some data in the list, but where do I declare the CListCtrl as a member?
In my class derived from CView, or somewhere else?
In a dialog-based application I know how to do this, but not here.
Is there an example to learn more about SDI and MDI programming?
Best regards, and thanks for any help
Arrin
|
|
|
|
|
Using the MFC Application Wizard to create the project, you opted for Single Document, correct? On the Generated Classes screen, you could have CListView as the base class for the view.
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
Hi,
thank you very much, it works: my first list view!
Now I can extend my app and learn more.
Best regards
Arrin.
|
|
|
|
|
Academic question.
<b>Which do you prefer</b>: using the preprocessor directive #define to replace a case label (for example, "case 0:" becoming "case NORTH:" via #define NORTH 0), or using an enum?
Here is a sample code using enum:
Enumerated variables have a natural partner in the switch statement, as in the following code example.
#include <stdio.h>
enum compass_direction
{
north,
east,
south,
west
};
enum compass_direction get_direction()
{
return south;
}
/* To shorten example, not using argp */
int main ()
{
enum compass_direction my_direction;
puts ("Which way are you going?");
my_direction = get_direction();
switch (my_direction)
{
case north:
puts("North? Say hello to the polar bears!");
break;
case south:
puts("South? Say hello to Tux the penguin!");
break;
case east:
puts("If you go far enough east, you'll be west!");
break;
case west:
puts("If you go far enough west, you'll be east!");
break;
}
return 0;
}
|
|
|
|
|
|
As already answered, enum is better.
One advantage is that modern compilers can emit a warning when not all enum values are handled by a switch statement, so you will be notified if you forget to handle a newly added value.
|
|
|
|
|
enum......
Preprocessor directives can make troubleshooting and reading your code tricky; it's not worth going down that route unless you have no alternative, and in this case you have a perfectly viable solution with an enum.
|
|
|
|
|
Thanks for all the replies. I have implemented the basic code on the SAM3X8E processor (Arduino Due) and it works as expected.
Thanks
// test input assignment array
#define MAX_INPUT 32
int Input[MAX_INPUT] = {42, 2, 3, 4, 6, 7, 8}; // TODO assign real
volatile byte bInputStateArray[MAX_INPUT]; // state machine array
// test using enum
enum State
{
LED_on,
LED_off,
NOT_IMPLEMENTED,
TEST
};
volatile enum State State_Array[MAX_INPUT];
do
{
switch (State_Array[iInputPin])
{
case LED_on:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case LED_on ");
#endif
#endif
State_Array[iInputPin] = LED_off;
break;
}
case LED_off:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case LED_off ");
#endif
#endif
State_Array[iInputPin] = TEST; // illegal case test
break;
}
case NOT_IMPLEMENTED:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case NOT_IMPLEMENTED ");
#endif
#endif
State_Array[iInputPin] = TEST; // illegal case test
break;
}
default:
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("Invalid case ");
#endif
#endif
break;
}
} while (true);
One more question: since the state is changed from an interrupt, it needs to be declared volatile. It compiles with "volatile enum ...", so the syntax should be correct, but I cannot test it yet because the hardware is not ready; I will try to emulate the interrupt.
Am I on the right track?
-- modified 22-Jun-15 21:55pm.
|
|
|
|
|
No, you don't have to declare the enum itself volatile, because the enumerators are a series of named constants and constants can't change.
You must declare your input variable volatile, because that is what changes under interrupt. This tells the compiler to take care when optimizing and make no assumptions about the variable's value. If you don't, there will be cases where the compiler assumes the variable is in a specific state and omits some code or uses a stale value.
|
|
|
|
|
Thanks,
the "problem" is that the Arduino IDE "compiler" is so "automated" that a lot of stuff is hidden from the user.
It did not complain when I tested this base code.
At present the input "pins" array is not volatile, and I'll change that.
I have a small additional challenge implementing the ISR (interrupt service routine).
The interrupt code involves a callback and is defined by the Arduino API.
The ISR does not accept any parameters; consequently, each interrupt process implementation has to have its own ISR. So, in theory, the input (variable) is processed in only one place.
It's a convoluted mess and I am working on it.
But I needed the basic state machine to work first.
So far, so good.
|
|
|
|
|
Somewhat unrelated to your original question, but...
REMOVE ALL THOSE #defines!
That is some ugly code! If you need a #define around a print function, do it once with a macro rather than at every single place you want to use that function.
#ifdef DEBUG
#define UTIL_PROC(X) Utility_LCD_Process((X));
#else
#define UTIL_PROC(X)
#endif
case LED_on:
{
UTIL_PROC("case LED_on")
State_Array[iInputPin] = LED_off;
break;
}
|
|
|
|
|
Good suggestion; however, I moved the #ifdef / #endif into the function itself, so it does <b>look</b> cleaner.
Actually, is the #else necessary?
|
|
|
|
|
If you're using my method, yes; otherwise the compiler will tell you that the function is undefined.
|
|
|
|
|
I'd love to hear any feedback regarding the design, style, and architecture.
You can find the source at http://github.com/corvusoft/restbed.
Asynchronous RESTful framework
#include <memory>
#include <cstdlib>
#include <restbed>
using namespace std;
using namespace restbed;
void get_method_handler( const shared_ptr< Session >& session )
{
const auto request = session->get_request( );
size_t content_length = 0;
request->get_header( "Content-Length", content_length );
session->fetch( content_length, [ ]( const shared_ptr< Session >& session,
const Bytes& body )
{
fprintf( stdout, "%.*s\n", ( int ) body.size( ), body.data( ) );
session->close( OK, "Hello, World!", { { "Content-Length", "13" } } );
} );
}
int main( const int, const char** )
{
auto resource = make_shared< Resource >( );
resource->set_path( "/resource" );
resource->set_method_handler( "GET", get_method_handler );
auto settings = make_shared< Settings >( );
settings->set_port( 1984 );
settings->set_default_header( "Connection", "close" );
Service service;
service.publish( resource );
service.start( settings );
return EXIT_SUCCESS;
}
modified 21-Jun-15 10:42am.
|
|
|
|
|
However this is not the appropriate place to post it (please remove).
|
|
|
|
|
Apologies, I was somewhat confused as to which forum this post belongs in. Do you have any suggestions?
|
|
|
|
|
Unfortunately, no (I am confused too).
By the way, I've balanced the downvote.
|
|
|
|
|