
Single Thread vs Dual Thread

9 Oct 2004
An introduction to one of the optimization methods for Intel Dual Xeon HT technology.

Sample Image - PI/PI.jpg


Objective

My objective in posting this article is to share a simple optimisation method. That's it!

Introduction

This article demonstrates how a dual-threaded program can perform better than a single-threaded one, especially on a dual Intel Xeon machine with HT (Hyper-Threading) technology. The demo posted here uses a simple PI calculation as an example of a computationally expensive function: it approximates the identity "integral from 0 to 1 of 4/(1+x*x) dx = PI" with the midpoint rectangle rule. Both PI functions run at 100% CPU utilisation.

Requirement

Very simple. To see a significant improvement, you need to run it on a dual-CPU machine.

Code

The demo code is very simple. Here's how the sample starts...

SingleThreadPI and DualThreadPI do exactly the same thing and produce the same result, but with different timing. The main difference is that DualThreadPI is written using the data parallelisation method: the loop iterations are split between two threads.

#include "stdafx.h"

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <string.h>
#include <time.h>

#define NUM_THREADS 2

HANDLE thread_handles[NUM_THREADS];
CRITICAL_SECTION hUpdateMutex;
static long num_steps = 100000 * 1000;
double step;
double global_sum = 0.0;

int SingleThreadPI();
int DualThreadPI();
// A thread entry point for CreateThread must have this signature;
// casting a void(void*) function to LPTHREAD_START_ROUTINE is undefined behaviour.
DWORD WINAPI ThreadPI(LPVOID arg);

int main(int argc, char* argv[])
{
  // Single thread
  SingleThreadPI();

  // Dual thread
  DualThreadPI();

  getchar();
  return 0;
}

int SingleThreadPI()
{
  int start, end, total;
  int i;
  double x, pi, sum = 0.0;

  start = clock();
  step = 1.0/(double) num_steps;

  for (i=1; i<=num_steps; i++)
  {
    x = (i-0.5)*step;
    sum = sum + 4.0/(1.0+x*x);
  }

  pi = step * sum;
  printf("\nSingleThreadPI\n");
  printf(" pi is %f \n", pi);

  end = clock();
  total = end - start;
  printf("%d\n", total);

  return 0;
}

int DualThreadPI()
{
  int start, end, total;
  double pi;
  int i;
  DWORD threadID;
  int threadArg[NUM_THREADS];

  start = clock();

  for (i=0; i<NUM_THREADS; i++)
    threadArg[i] = i+1;

  InitializeCriticalSection(&hUpdateMutex);

  for (i=0; i<NUM_THREADS; i++)
  {
    thread_handles[i] = CreateThread(NULL, 0,
        ThreadPI, &threadArg[i], 0, &threadID);
  }

  // wait until both threads end their processing
  WaitForMultipleObjects(NUM_THREADS, thread_handles, TRUE, INFINITE);

  // release the thread handles and the critical section
  for (i=0; i<NUM_THREADS; i++)
    CloseHandle(thread_handles[i]);
  DeleteCriticalSection(&hUpdateMutex);

  pi = global_sum * step;
  printf("\nDualThreadPI\n");
  printf(" pi is %f \n", pi);

  end = clock();
  total = end - start;
  printf("%d\n", total);

  return 0;
}

DWORD WINAPI ThreadPI(LPVOID arg)
{
  int i, start;
  double x, sum = 0.0;

  start = *(int *) arg; // arg is the thread number (1 or 2) set in DualThreadPI

  step = 1.0/(double) num_steps;

  // NUM_THREADS in this case is 2, so:
  //   thread 1 computes the odd-numbered iterations of the loop,
  //   thread 2 computes the even-numbered iterations.
  for (i=start; i<=num_steps; i=i+NUM_THREADS)
  {
    x = (i-0.5)*step;
    sum = sum + 4.0/(1.0+x*x);
  }

  // Guard the shared accumulator; without this the += is not atomic
  // and the two threads could lose each other's contribution.
  EnterCriticalSection(&hUpdateMutex);
  global_sum += sum;
  LeaveCriticalSection(&hUpdateMutex);

  return 0;
}

Test Performance

As expected, profiling shows that this type of optimisation yields no significant improvement on a uni-processor machine. Run it on a dual-processor machine, however, and you will definitely see a big difference between single thread and dual thread. Here are the live samples...

Machine -> Pentium 4 (2.4GHz, 1GB Memory)

Profile: Function timing, sorted by time
Date:    Sun Oct 10 02:23:09 2004 

Program Statistics
------------------
    Command line at 2004 Oct 10 02:23: "G:\j2\net\code project\article - 
         Optimise For Intel HT Technology\InLocal\PI\Release\PI" 
    Total time: 5342.205 millisecond
    Time outside of functions: 38.445 millisecond
    Call depth: 2
    Total functions: 8
    Total hits: 3
    Function coverage: 37.5%
    Overhead Calculated 4
    Overhead Average 4 
Module Statistics for pi.exe
----------------------------
    Time in module: 5303.760 millisecond
    Percent of time in module: 100.0%
    Functions in module: 8
    Hits in module: 3
    Module function coverage: 37.5% 
        Func          Func+Child           Hit
        Time   %         Time      %      Count  Function
---------------------------------------------------------
    2246.857  42.4     5303.760 100.0        1 _main (pi.obj)
    1620.240  30.5     1620.240  30.5        1 SingleThreadPI(void) (pi.obj)
    1436.662  27.1     1436.662  27.1        1 DualThreadPI(void) (pi.obj) 


Contributed by Nigel
Machine -> Dell Precision 530, Pentium Xeon (2 x 2.0GHz, 512MB Memory)

Profile: Function timing, sorted by time
Date: Sat Oct 09 16:58:43 2004


Program Statistics
------------------
Command line at 2004 Oct 09 16:58: "D:\somewhere\PI_demo\PI\Debug\PI" 
Total time: 10076.987 millisecond
Time outside of functions: 11.837 millisecond
Call depth: 2
Total functions: 7
Total hits: 3
Function coverage: 42.9%
Overhead Calculated 976
Overhead Average 976

Module Statistics for pi.exe
----------------------------
Time in module: 10065.150 millisecond
Percent of time in module: 100.0%
Functions in module: 7
Hits in module: 3
Module function coverage: 42.9%

        Func          Func+Child           Hit
        Time   %         Time      %      Count  Function
---------------------------------------------------------
    7175.142  71.3    10065.150 100.0        1 _main (pi.obj)
    1931.827  19.2     1931.827  19.2        1 SingleThreadPI(void) (pi.obj)
     958.182   9.5      958.182   9.5        1 DualThreadPI(void) (pi.obj)

Machine -> Pentium Xeon (2 x 2.8GHz, 512MB Memory)

(Pending... to be continued...)
This one should show a significant improvement. Would anyone volunteer?

Finally

Actually, I've already seen its performance in my office (on a dual Xeon 2.8GHz it doubles the performance); it's just that I am at home now, so I can only provide my own PC's figures (a modest improvement). The rest I will leave for you to prove yourselves... *wink*

Learning is fun =)

Recommendation/Tips

After reading the article, you may want to try this out in your own application. Suppose you put in a lot of effort and, after implementing it, don't get the result you expected. Please don't jump at me: you might have targeted the wrong part of your application, or you may have tested on a single-CPU machine, where you won't see much improvement (see Requirement). This article only shows you some light in the tunnel; it won't take you far by itself. So here are some tips. To take advantage of this type of optimisation, my practice is to profile my application first, then target frequently used and especially computationally expensive functions: compression, image filtering, morphology processing, and so on.

A large-scale server application will probably run on hardware with at least 2-4 CPUs per machine, but I have never tried this there, and most likely will not in the near future either, because I am not in that field. You may want to try it, though! If the result is good, please don't mind sharing the experience here. Good luck!

Link

Intel Web Site

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here