My objective is for the app to utilize ALL hardware processors / CPUs.
From the posts so far I feel I do not need multiple threads running on each CPU; one will do for now.
I am just trying to understand the relationship between processors and threads.
The following code leads me to believe that threads are NOT related to the number of processors:
int iCPU = omp_get_num_procs();
cout << "# of CPUs " << dec << iCPU << endl;             // prints "4" - the number of CPUs

int iMAXThread = omp_get_max_threads();
cout << "# of iMAXThread " << dec << iMAXThread << endl; // also prints "4" - apparently defaults to one thread per CPU?

// Now set the number of threads
omp_set_num_threads(10);
iMAXThread = omp_get_max_threads();
cout << "# of iMAXThread " << dec << iMAXThread << endl; // now prints "10" - per "system"?
exit(1);
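To test this further, here is a minimal sketch (my assumption: GCC with -fopenmp) that should confirm the requested thread count is honored even when it exceeds the processor count:

#include <omp.h>
#include <cstdio>

int main()
{
    omp_set_num_threads(10);      // ask for more threads than processors
    #pragma omp parallel
    {
        #pragma omp single        // one thread reports for the whole team
        printf("team of %d threads on %d processors\n",
               omp_get_num_threads(), omp_get_num_procs());
    }
    return 0;
}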
|
I'd recommend you google for "openmp tutorial C++" and work through one or more of them. I don't know anything about OpenMP, except that it's a multiprocessing (i.e. threading) extension to C/C++. I do know that multi-threading is easy to get wrong. Added to which, having multiple threads means that you get out-of-order (to the human brain, anyway) execution, which can lead to unexpected results, e.g. the following code, which comes from one of the tutorials (don't ask me anything further about it!):
$ cat example.c
#include <omp.h>
#include <stdio.h>

int main()
{
    #pragma omp parallel
    {
        int ID = omp_get_thread_num();
        printf("hello(%d)", ID);
        printf("world(%d)\n", ID);
    }
    return 0;
}
$ gcc -fopenmp example.c -o example
$ ./example
hello(1)world(1)
hello(0)world(0)
hello(3)world(3)
hello(2)world(2)
$ ./example
hello(3)world(3)
hello(0)world(0)
hello(1)world(1)
hello(2)world(2)
$ ./example
hello(3)world(3)
hello(2)hello(1)world(1)
hello(0)world(0)
world(2)
That's about as simple as an MP program gets, and you can see that:
a) the threads didn't run in the same order every time, and
b) in the last run, thread(2) was interrupted by thread(1) and thread(0) before it completed.
Proceed with caution!
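If the interleaving itself is a problem, OpenMP can serialize just that part. A minimal variation of the code above - hedged, since I'm no OpenMP expert, but #pragma omp critical is a standard construct - which keeps each thread's two printf calls together while the thread order remains unspecified:

#include <omp.h>
#include <stdio.h>

int main()
{
    #pragma omp parallel
    {
        int ID = omp_get_thread_num();
        /* Only one thread at a time may enter a critical section, so a
           given thread's two printf calls are never interleaved; the
           order of the threads is still unspecified. */
        #pragma omp critical
        {
            printf("hello(%d)", ID);
            printf("world(%d)\n", ID);
        }
    }
    return 0;
}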
|
I'd also like to point out that with threads, debugging becomes more challenging, too!
|
I really appreciate all contributions to this thread so far.
As with anything "new" to me, I do not have any specific / planned way to proceed with my learning, especially to cover some of the less obvious / less visible aspects.
I do however maintain that learning a new technology is not always "from the bottom up"; it is a mosaic / puzzle whose pieces by themselves do not make much sense.
Let me restate my "objective" in using OpenMP - utilize ALL CPUs available to the application.
I have deliberately NOT specified WHAT the app does. My objective should be descriptive enough.
But to attempt to refine the "puzzle", and for those curious: I do not foresee any future application being critically dependent on the sequence in which it executes.
I still believe that utilizing all hardware CPUs will eventually benefit the app, but honestly I am not interested in proving / disproving that in any specific way.
As far as OpenMP "tutorials" go - it is a very mature technology and most of the tutorials reflect that. And some of them are just repetitious in cautioning the reader about the "difficulties" of debugging multi-threaded applications.
Cheers
Vaclav
|
OpenMP works out how many CPU cores you have at runtime.
It isn't a fixed answer: there may be cores unavailable to you, and there isn't a thing you can do about it.
This call returns the number of cores available to you at the time of the call:
int omp_get_num_procs();
Inside Windows or Linux, certain threads may have a core locked with core affinity, and you can sit there and whistle Dixie - the OS is never going to give OpenMP that core.
OpenMP is not the OS and is not in control of core scheduling; it asks for, and locks, the maximum the O/S will allow.
So if you aren't getting all the cores, you need to look at the O/S level.
Google something like "Affinity control outside OpenMP".
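If you want to see what the O/S is actually giving you, here is a minimal Linux-only sketch (an assumption on my part - compile with g++ -fopenmp on Linux; on Windows you would look at GetProcessAffinityMask instead):

#include <omp.h>
#include <sched.h>   // Linux-specific affinity API
#include <cstdio>

int main()
{
    // What OpenMP thinks it can use right now:
    printf("omp_get_num_procs(): %d\n", omp_get_num_procs());

    // What the O/S says this process is actually allowed to run on:
    cpu_set_t set;
    if (sched_getaffinity(0, sizeof(set), &set) == 0)
        printf("cores in our affinity mask: %d\n", CPU_COUNT(&set));
    return 0;
}

If the two numbers differ, the missing cores have been taken away at the O/S level, exactly as described above.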
In vino veritas
|
Thanks Leon.
Found this, and my first impression is that a simple "Hello world" MAY be a better way to get more familiar with OpenMP. Just "adding OpenMP" to my existing code is not going to tell me whether it is doing anything useful until I add some controls / measurements / tracing.
That is contrary to my initial statement that I am not interested in analyzing OpenMP processes, but to do that I need more experience with OpenMP.
Error[^]
|
I would suggest you need a better understanding of threads, and some practical experience of using them. Without understanding the basics, I doubt that OpenMP is going to bring you any benefit.
|
No, threads and processors are not related in any way. Processors (CPUs) are hardware; threads are pieces of code that run on a CPU. An application can create as many threads as it likes, regardless of how many processors exist in the system it runs on. In most cases you do not need to know how many CPUs are available, since your application should be based on its business / technical design rather than the hardware it runs on. Finally, do not assume that you can control how many processors your application uses; the operating system is in control of resources, and allocates them as necessary.
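To illustrate the point, a small sketch (hypothetical - it uses plain std::thread rather than OpenMP) that creates far more threads than there are processors and lets the operating system schedule them:

#include <iostream>
#include <thread>
#include <vector>

int main()
{
    // The hardware count is only a hint; it does not cap thread creation.
    std::cout << "hardware threads: "
              << std::thread::hardware_concurrency() << '\n';

    std::vector<std::thread> pool;
    for (int i = 0; i < 32; ++i)              // far more than 4 CPUs
        pool.emplace_back([] { /* trivial work */ });
    for (auto& t : pool)
        t.join();
    std::cout << "ran " << pool.size() << " threads\n";
    return 0;
}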
|
Need C code for the below scenario:
Case 1: Input: string 1: -(a+b+c)
               string 2: -a-b-c
        Output: True
Case 2: Input: string 1: -(a-b)
               string 2: -a-b
        Output: False
|
Post what you have tried, and try to ask specific question(s). You're not likely to find someone who will just write it for you.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
Hint: count the minus signs in your expanded string.
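To flesh that hint out a little: rather than only counting signs, you can expand string 1 completely and compare. A sketch (in C++ rather than the requested C, and assuming single-letter operands and balanced parentheses, since the post gives no fuller grammar):

#include <iostream>
#include <string>

// Remove parentheses, flipping the sign of every term inside a group
// that is preceded by '-'.
std::string expand(const std::string& s)
{
    std::string out;
    std::string negated;   // one flag per open '(': was that group negated?
    int flips = 0;         // how many enclosing negated groups
    char pending = '+';    // sign seen before the next token
    for (char c : s) {
        if (c == '+' || c == '-') {
            pending = c;
        } else if (c == '(') {
            negated.push_back(pending == '-' ? '1' : '0');
            if (pending == '-') ++flips;
            pending = '+';
        } else if (c == ')') {
            if (negated.back() == '1') --flips;
            negated.pop_back();
        } else {           // an operand such as a, b, c
            char sign = pending;
            if (flips % 2) sign = (sign == '+') ? '-' : '+';
            out += sign;
            out += c;
            pending = '+';
        }
    }
    if (!out.empty() && out[0] == '+') out.erase(0, 1);
    return out;
}

int main()
{
    std::cout << (expand("-(a+b+c)") == "-a-b-c") << '\n';  // 1 (True)
    std::cout << (expand("-(a-b)")   == "-a-b")   << '\n';  // 0 (False)
}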
In vino veritas
|
This is just curiosity, not a question.
I code in C++, using an IDE.
It "builds" the executable using "make", without me messing with the makefile "automatically generated" by the IDE. Accessing the makefile sometimes produces a warning that it is generated by the IDE and should not be modified.
Most of the time I follow instructions, so I let the IDE do its stuff.
I have noticed that some code posts here include manually built makefiles.
Why, if a run-of-the-mill IDE does it "automatically"?
I am sure there are occasions when an IDE-generated makefile could use some tweaking, but in general the IDE does the job well.
Off soap box.
Cheers
Vaclav
|
It is the same as with the compiler. You can let the IDE set the options and generate the command, or you can run it yourself in a command window. Programs such as cl (the C/C++ compiler), make, etc. have existed for years, long before the IDEs were developed. The IDEs just try to make life easier for you. I have a Visual Studio project that does not build the standard way, so I had to create my own Makefile for it. You are free to do that with any of your projects if you want greater control over how they are built.
|
Vaclav_ wrote: I have noticed that some code posts here include manually built makefiles.
Why, if a run-of-the-mill IDE does it "automatically"?
Your IDE probably creates the Makefile from some sort of project database, perhaps a .proj file. Let's assume that I create a library that you would like to use, but I use a different IDE than you, one which uses Project.DB files that are completely different from the .proj files your IDE uses. If my project provides a Makefile that can be used from the command line, then you don't have to figure out how to import and build the library using your IDE, or how to install and use the IDE I use. Additionally, your IDE might clobber the Makefile distributed with my project, in which case you're going to have to try to make sure you've got all the dependencies right, which might take a lot of time.
In the land of UNIX-like OSes (Linux, BSD, Darwin, etc.), make is almost ubiquitous. If the OS doesn't ship with make, someone will probably have a version of make (e.g. gnu-make) available. That means make is, more or less, the de facto build tool.
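For anyone who has only ever let the IDE do it, a hand-written Makefile is quite small. A minimal sketch (the file names main.cpp / util.cpp / util.h are hypothetical, and note that each recipe line must start with a tab character):

CXX      = g++
CXXFLAGS = -Wall -O2

app: main.o util.o
	$(CXX) $(CXXFLAGS) -o app main.o util.o

main.o: main.cpp util.h
	$(CXX) $(CXXFLAGS) -c main.cpp

util.o: util.cpp util.h
	$(CXX) $(CXXFLAGS) -c util.cpp

clean:
	rm -f app *.o

Running make then rebuilds only the targets whose prerequisites have changed.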
|
The answer to that is historic. Most of the original C compilers date from the days of DOS and Unix, when all we had was a command line. On the command line you used to have to type extremely long strings, and so everyone wrote DOS batch files or Unix shell scripts. Then in 1976 make was created by Stuart Feldman, and it was released to the public domain.
Make never really took off with DOS users, because by 1983 a small company called Borland had begun with Pascal, and later C, as a text-mode IDE on DOS.
Borland - Wikipedia[^]
Microsoft saw the success of Borland as a threat (they used the DOS command line in house) and essentially began to mimic Borland, at least on the GUI (the compiler remained a DOS executable), and thus were born the first Microsoft compiler tools.
With no big bucks being spent on GUI compilers for Linux, make became a staple of the Linux OS. Even most embedded compilers tended to stay on DOS or Windows, with a couple of exceptions. So generally you could pick Windows versus Linux developers by whether they knew their way around make, which developed its own language.
More recently, GUI compiler IDEs have started to pop up on Linux, and so they will import and work with makefiles. Windows Visual Studio can now even work with makefiles, as it is targeting Linux developers, although in all honesty Microsoft would like them to move to CMake.
There is also a very subtle difference between Windows and Linux repositories, caused by this history, that many Linux programmers fail to notice. Most Windows code keeps the header files in the same directory as the C files; in most Linux repositories you will find all the headers in an include directory (they seem to view it as good practice) and the source files elsewhere. The reason for the difference is that Windows compilers have a specific search order: the same directory as the C file, then the library directory, then the user-defined include directories. Under Windows, compiling #include <some.h> is very different from #include "some.h" (the first means the C system directory, the second means the user-defined directory); on Linux repositories that difference always seems to get lost (all the code is pubic domain, none is company copyrighted) - they randomly use either form and fix it up in the makefile. Under Windows, if you start messing around with the system directories you will get into a world of hurt (as much is precompiled and copyrighted), and usually only SDKs do that.
In vino veritas
|
leon de boer wrote: all the code is pubic domain
I now have a most strange image in my mind ...
|
Quote: I have noticed that some code posts here include manually built makefiles. Why, if a run-of-the-mill IDE does it "automatically"?
Possibly because I am not using an IDE.
|
baid is an IDE; bash is a shell.
|
I'm working on a music notation program, where it needs to be possible to select any object, like a note or a clef, by clicking on the glyph. In order to determine whether a glyph has been clicked, I'm trying the approach of iterating through the drawable objects, rendering them each into a memory device context with a blank bitmap selected, mapping the mouse coordinates into this bitmap, and checking whether the pixel they map to is filled.
While I'm getting this system working, I'm rendering a rectangular frame with the dimensions of the memory device context's bitmap to the corner of the window, and rendering the bitmap itself into it using BitBlt in order to see what's going on. What I see surprises me. I thought that rendering a glyph into the memory device context using TextOutW with x = 0 and y = 0 would put the glyph's origin at the top left of the bitmap, like it does when rendering to a window in the most basic case. Instead, the glyph appears to the left but vertically centered in the frame. So...given that the bitmap handle I selected into the memory device context was created using CreateCompatibleBitmap with cx = w and cy = h, does TextOutW treat it as having a coordinate system where x ranges from 0 to w while y ranges from -h/2 to h/2? Is there somewhere on MSDN that discusses this?
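For reference, here is a minimal sketch of the memory-DC hit test described above (the names hdcScreen, font and pt are placeholders, error handling is omitted, and pt is assumed to be already mapped into the bitmap's coordinates):

#include <windows.h>

bool GlyphHit(HDC hdcScreen, HFONT font, wchar_t glyph,
              int w, int h, POINT pt)
{
    HDC mem = CreateCompatibleDC(hdcScreen);
    HBITMAP bmp = CreateCompatibleBitmap(hdcScreen, w, h);
    HGDIOBJ oldBmp = SelectObject(mem, bmp);
    HGDIOBJ oldFont = SelectObject(mem, font);

    // Clear to white, then draw the glyph in black at the top-left.
    RECT rc = { 0, 0, w, h };
    FillRect(mem, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));
    SetTextColor(mem, RGB(0, 0, 0));
    SetBkMode(mem, TRANSPARENT);
    SetTextAlign(mem, TA_LEFT | TA_TOP);
    TextOutW(mem, 0, 0, &glyph, 1);

    // Any pixel that is no longer white was covered by the glyph.
    bool hit = (GetPixel(mem, pt.x, pt.y) != RGB(255, 255, 255));

    SelectObject(mem, oldFont);
    SelectObject(mem, oldBmp);
    DeleteObject(bmp);
    DeleteDC(mem);
    return hit;
}

On the vertical offset you are seeing: with the default alignment (TA_LEFT | TA_TOP), TextOutW positions the top of the character *cell* at (x, y), and music glyphs often occupy only part of a very tall cell, which can look like vertical centering; GetTextMetrics and SetTextAlign let you inspect and change the reference point.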
|
I might not be getting the full scope of your question, but since you are drawing the glyphs into what I assume is a "known" rectangular area, could you not use CRect::PtInRect() to test whether you hit a glyph area?
|
I want the clickable area to be exactly the pixels that are part of the glyph, rather than anywhere in a bounding rectangle that contains the glyph. There are situations where music symbols nest together, or where parts of one are visible through the gaps in another. Using single rectangles for each would count one as obscuring parts of the other even though those parts are visible - any part that's visible should be clickable. I could define a set of bounding rectangles for each glyph in order to approximate them more tightly, but that's a lot of work I'd rather not do unless it turns out that the method I describe is too slow.
|
You are dealing with TTF glyphs, not a bitmap, and you will need to understand how FUnits and the em square work:
TrueType fundamentals - Typography | Microsoft Docs[^]
The glyph is rendered to the screen; it never exists as a bitmap.
Now, if font style isn't important, you could select a bitmap font and it will work much as you think.
If you want to do this with a TTF, the function you want is called
GetGlyphOutlineA function | Microsoft Docs[^]
The DC will be the window you are drawing the text on, so it matches one to one.
Now the problem is that you will have an outline made of beziers and lines, and to decide whether you are in or out you need to scanline the contour at the point's y value and see if the point's x lies between two crossing points of the scanline. If you follow all that, these are the two scanline functions you need:
#include <float.h>   /* FLT_EPSILON */
#include <math.h>    /* sqrt */

/* Returns 1 and writes the x crossing to *px1 if the horizontal scanline
   at sy crosses the line segment (x1,y1)-(x2,y2), else returns 0. */
short ScanlineLine2D(long x1, long y1, long x2, long y2,
                     long sy, long* px1)
{
    long xt;
    double t;
    if (y2 == y1) return 0;    /* horizontal segment: no single crossing */
    if (((sy >= y1) && (sy <= y2)) || ((sy >= y2) && (sy <= y1))) {
        t = ((double)(sy - y1) - FLT_EPSILON) / (double)(y2 - y1);
        if ((t >= 0.0) && (t <= 1.0)) {
            xt = (long)((double)(x2 - x1) * t) + x1;
            if (px1) (*px1) = xt;
            return 1;
        } else return 0;
    } else return 0;
}

/* Returns the number of crossings (0..2) of the scanline at sy with the
   quadratic bezier from (x1,y1) to (x3,y3) with control point (x2,y2);
   the x values of the crossings are written to *px1 and *px2. */
short ScanlineQuadBezier2D(long x1, long y1, long x2, long y2,
                           long x3, long y3, long sy,
                           long* px1, long* px2)
{
    short rslt;
    long dx0, dy0, dx1, dy1, dx2, dy2, xf;
    long* px;
    double tmf, tdf, tf, ymf;

    dx0 = x2 - x1;
    dy0 = y2 - y1;
    dx1 = x3 - x2;
    dy1 = y3 - y2;

    /* Degenerate cases: the curve collapses to a point or a line. */
    if ((dx0 == 0) && (dy0 == 0) && (dx1 == 0) && (dy1 == 0))
        return 0;
    if ((dx0 == 0) && (dy0 == 0))
        return ScanlineLine2D(x2, y2, x3, y3, sy, px1);
    if ((dx1 == 0) && (dy1 == 0))
        return ScanlineLine2D(x1, y1, x2, y2, sy, px1);

    dx2 = dx1 - dx0;
    dy2 = dy1 - dy0;
    if (dy2 != 0) {
        /* Solve the quadratic y(t) = sy for t, then evaluate x(t) for
           each root that lies on the curve (0 <= t <= 1). */
        tmf = -(double)dy0 / (double)dy2;
        ymf = (double)y1 + FLT_EPSILON + tmf * (double)dy0;
        tdf = ((double)sy - ymf) / (double)dy2;
        if (tdf < 0.0) {
            return 0;                     /* no real roots */
        } else if (tdf != 0.0) tdf = sqrt(tdf);
        rslt = 0;
        px = px1;
        tf = tmf - tdf;
        if ((tf >= 0.0) && (tf <= 1.0)) {
            xf = (long)(((double)dx2 * tf + (double)(dx0 * 2)) * tf) + x1;
            (*px) = xf;
            rslt++;
            px = px2;
        }
        tf = tmf + tdf;
        if ((tf >= 0.0) && (tf <= 1.0)) {
            xf = (long)(((double)dx2 * tf + (double)(dx0 * 2)) * tf) + x1;
            (*px) = xf;
            rslt++;
            px = px2;
        }
        return rslt;
    } else if (dy0 != 0) {
        /* dy2 == 0: y(t) is linear in t, so there is at most one root. */
        tf = ((double)(sy - y1) - FLT_EPSILON) / (double)(dy0 * 2);
        if ((tf >= 0.0) && (tf <= 1.0)) {
            xf = (long)(((double)dx2 * tf + (double)(dx0 * 2)) * tf) + x1;
            (*px1) = xf;
            return 1;
        } else return 0;
    } else return 0;
}
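To show how those two functions would be used, a hypothetical even-odd test (it assumes the outline has already been decomposed into straight segments and quadratic beziers, e.g. from the TTPOLYGONHEADER data that GetGlyphOutline returns):

#include <vector>

// A contour edge: either a straight segment or a quadratic bezier
// (x3/y3 are unused for straight segments).
struct Edge {
    bool isCurve;
    long x1, y1, x2, y2, x3, y3;
};

// Even-odd rule: the point is inside if an odd number of scanline
// crossings lie strictly to its left.
bool PointInContour(const std::vector<Edge>& edges, long px, long py)
{
    std::vector<long> xs;
    for (const Edge& e : edges) {
        long a = 0, b = 0;
        if (e.isCurve) {
            short n = ScanlineQuadBezier2D(e.x1, e.y1, e.x2, e.y2,
                                           e.x3, e.y3, py, &a, &b);
            if (n >= 1) xs.push_back(a);
            if (n >= 2) xs.push_back(b);
        } else if (ScanlineLine2D(e.x1, e.y1, e.x2, e.y2, py, &a)) {
            xs.push_back(a);
        }
    }
    int left = 0;
    for (long x : xs)
        if (x < px) ++left;
    return (left % 2) == 1;
}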
In vino veritas
|
I was actually able to make what I originally had in mind work - I know the glyphs aren't *defined* as bitmaps, but rendering a glyph into a device context with a bitmap selected effectively makes it into a bitmap, and that's what I did - but analyzing the glyph outlines directly as you suggest is surely more efficient, so I'll probably try it too.