Even something as small as a DateTime.Now call
can be a bottleneck. On a typical Windows system, Environment.TickCount
is at least 100 times faster. You don't believe it? Try it yourself! Here is the test code:
using System;
using System.Diagnostics;

namespace TimerPerformance
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Performance Tests");
            Console.WriteLine("  Stopwatch Resolution (ns): " +
                (1000000000.0 / Stopwatch.Frequency).ToString());
            RunTests();
            Console.WriteLine("Tests Finished, press any key to stop...");
            Console.ReadKey();
        }

        // Accumulator that keeps the JIT from optimizing the loops away.
        public static long DummyValue;

        public static void RunTests()
        {
            const int loopEnd = 1000000;
            Stopwatch watch = new Stopwatch();

            Console.WriteLine();
            Console.WriteLine("Reference Loop (NOP) Iterations: " + loopEnd);
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += i;
            }
            watch.Stop();
            Console.WriteLine("  Reference Loop (NOP) Elapsed Time (ms): " +
                ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query Environment.TickCount");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += Environment.TickCount;
            }
            watch.Stop();
            Console.WriteLine("  Query Environment.TickCount Elapsed Time (ms): " +
                ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query DateTime.Now.Ticks");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += DateTime.Now.Ticks;
            }
            watch.Stop();
            Console.WriteLine("  Query DateTime.Now.Ticks Elapsed Time (ms): " +
                ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query Stopwatch.ElapsedTicks");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += watch.ElapsedTicks;
            }
            watch.Stop();
            Console.WriteLine("  Query Stopwatch.ElapsedTicks Elapsed Time (ms): " +
                ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());
        }
    }
}
Here are the results for some machines (1,000,000 iterations, times in milliseconds):
Hardware                              | Empty Loop | Environment.TickCount | DateTime.Now.Ticks
--------------------------------------|------------|-----------------------|-------------------
AMD Opteron 4174 HE 2.3 GHz           | 8.7 ms     | 16.6 ms               | 2227 ms
AMD Athlon 64 X2 5600+ 2.9 GHz        | 6.8 ms     | 15.1 ms               | 1265 ms
Intel Core 2 Quad Q9550 2.83 GHz      | 2.1 ms     | 4.9 ms                | 557.8 ms
Azure A1 (Intel Xeon E5-2660 2.2 GHz) | 5.2 ms     | 19.9 ms               | 168.1 ms
OK, a single DateTime.Now
call takes only about 1-2 microseconds (e.g. 2227 ms / 1,000,000 calls is roughly 2.2 µs on the Opteron). That is a maximum throughput of 500,000 to 1,000,000 calls per second. In contrast, Environment.TickCount
reaches on the order of 100 million calls per second or more, as the table shows. If a particular operation needs 10 timestamps, DateTime.Now alone caps its throughput at 50,000 to 100,000 operations per second.
For example, an HTTP request that measures response time and throughput (data transfer rate) needs a timestamp for every chunk of data it receives from the web server. Until the operation completes, there are at least 3 timestamps (begin, response, end) to measure response time and download time. If the throughput (data transfer rate) is measured as well, the number of timestamps depends on how many chunks are received. Multi-threaded access makes things even worse: both Environment.TickCount
and DateTime.Now
read shared system resources, so concurrent calls contend on the underlying synchronization and do not parallelize.
Real systems like the Crawler-Lib Engine can perform 20,000-30,000 HTTP requests per second on relatively good hardware, so it is obvious that time measurement has an impact on the maximum throughput.
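One practical detail when using Environment.TickCount for durations, not covered by the test code above: the value is a signed 32-bit counter that wraps roughly every 49.7 days of system uptime. A minimal sketch (my own illustration, not code from the article) shows that subtracting two readings in an unchecked context still yields the correct delta across the wrap:

```csharp
using System;
using System.Threading;

static class TickCountDemo
{
    // Elapsed milliseconds between two Environment.TickCount readings.
    // The unchecked subtraction yields the correct delta even if the
    // counter wrapped from Int32.MaxValue to Int32.MinValue in between.
    static int ElapsedMs(int startTicks, int endTicks)
    {
        return unchecked(endTicks - startTicks);
    }

    static void Main()
    {
        int start = Environment.TickCount;
        Thread.Sleep(50);
        Console.WriteLine("Elapsed: ~" + ElapsedMs(start, Environment.TickCount) + " ms");

        // The delta also survives a simulated wrap-around:
        Console.WriteLine(ElapsedMs(int.MaxValue - 5, int.MinValue + 5)); // 11
    }
}
```

This only works as long as the measured interval itself stays well below the ~24.9-day half-range of the counter.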
Some will argue that DateTime.Now
is much more precise than Environment.TickCount
. This is only partially true. Here is a code snippet that measures the granularity of the timestamps:
using System;
using System.Threading;

// ...

if (Environment.TickCount > int.MaxValue - 60000)
    throw new InvalidOperationException(
        "Tick count will overflow in the next minute, test can't be run");

var startTickCount = Environment.TickCount;
var currentTickCount = startTickCount;
int minGranularity = int.MaxValue;
int maxGranularity = 0;
while (currentTickCount < startTickCount + 1000)
{
    var tempMeasure = Environment.TickCount;
    if (tempMeasure - currentTickCount > 0)
    {
        minGranularity = Math.Min(minGranularity, tempMeasure - currentTickCount);
        maxGranularity = Math.Max(maxGranularity, tempMeasure - currentTickCount);
    }
    currentTickCount = tempMeasure;
    Thread.Sleep(0);
}
Console.WriteLine("Environment.TickCount Min Granularity: " + minGranularity +
    ", Max Granularity: " + maxGranularity + " ms");
Console.WriteLine();

var startTime = DateTime.Now;
var currentTime = startTime;
double minGranularityTime = double.MaxValue;
double maxGranularityTime = 0.0;
while (currentTime < startTime + new TimeSpan(0, 0, 1))
{
    var tempMeasure = DateTime.Now;
    if ((tempMeasure - currentTime).TotalMilliseconds > 0)
    {
        minGranularityTime = Math.Min(minGranularityTime,
            (tempMeasure - currentTime).TotalMilliseconds);
        maxGranularityTime = Math.Max(maxGranularityTime,
            (tempMeasure - currentTime).TotalMilliseconds);
    }
    currentTime = tempMeasure;
    Thread.Sleep(0);
}
Console.WriteLine("DateTime.Now Min Granularity: " + minGranularityTime +
    ", Max Granularity: " + maxGranularityTime + " ms");
Running this on several machines shows that Environment.TickCount
has a granularity of about 16 ms (15.6 ms), which is the default system-wide timer resolution. The system-wide timer resolution can be changed with the timeBeginPeriod
function down to 1 ms, but this is generally not recommended because it affects all applications. DateTime.Now
has a granularity of 16 ms on some machines and a better granularity, down to 1 ms, on others, but it is never much better than 1 ms. If you need to measure smaller intervals, you have to use the System.Diagnostics.Stopwatch
class, which is in fact a high-resolution timer.
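For completeness, a minimal sketch of using Stopwatch for sub-millisecond measurements (my own illustration, assuming a modern .NET runtime where the high-resolution performance counter is available):

```csharp
using System;
using System.Diagnostics;

class StopwatchDemo
{
    static void Main()
    {
        // IsHighResolution reports whether the underlying high-resolution
        // performance counter is used (typically true on modern systems).
        Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);

        Stopwatch watch = Stopwatch.StartNew();
        // ... the operation to measure ...
        watch.Stop();

        // ElapsedTicks relates to Stopwatch.Frequency (ticks per second),
        // so microseconds can be derived directly:
        double microseconds = watch.ElapsedTicks * 1000000.0 / Stopwatch.Frequency;
        Console.WriteLine("Elapsed: " + microseconds + " µs");
    }
}
```

Note that Stopwatch ticks are not DateTime ticks; always convert via Stopwatch.Frequency as shown, never via TimeSpan.TicksPerSecond.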
As a consequence, the Crawler-Lib Framework uses Environment.TickCount
for the timestamps needed to measure durations of responses, tasks and the like. Soon we will release the free Crawler-Lib Core library, which contains a TickTimestamp
class that can be used for duration and throughput computations.
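To illustrate the idea only (this is a hypothetical sketch, not the actual Crawler-Lib TickTimestamp implementation, whose API may differ entirely), a tick-based timestamp helper for duration and throughput computations could look like this:

```csharp
using System;

// Hypothetical sketch of a tick-based timestamp helper; the real
// Crawler-Lib TickTimestamp class is not public at the time of writing.
struct TickStamp
{
    private readonly int ticks;
    private TickStamp(int t) { ticks = t; }

    // Capture the current Environment.TickCount without touching DateTime.Now.
    public static TickStamp Now
    {
        get { return new TickStamp(Environment.TickCount); }
    }

    // Wrap-around-safe duration in milliseconds up to a later timestamp.
    public int ElapsedMs(TickStamp later)
    {
        return unchecked(later.ticks - ticks);
    }

    // Items per second for 'count' items processed since this timestamp.
    public double Throughput(TickStamp later, long count)
    {
        int ms = ElapsedMs(later);
        return ms > 0 ? count * 1000.0 / ms : 0.0;
    }
}
```

Because every operation here is a plain integer read and subtraction, such a helper keeps the per-timestamp cost at the Environment.TickCount level discussed above.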