This article is Part 4 of a four-part series of tutorials that aims to give a brief but advanced introduction to programming with C#.
These articles are based on lecture notes, which were originally given in the form of tutorials available on Tech.Pro.
Table of Contents
- Introduction
- More Control on Events
- Overloading Operators
- The Yield Statement
- Iterators
- Understanding Co- and Contra-Variance
- Using Attributes Effectively
- Elegant Binding
- Unsafe Code
- Communication Between Native and Managed Code
- Effective C#
- Outlook
- Other Articles in This Series
- References
- History
This tutorial aims to give a brief but advanced introduction to programming with C#. The prerequisites for understanding this tutorial are a working knowledge of programming, the C programming language, and a little basic mathematics. Some basic knowledge of C++ or Java could be helpful, but should not be required.
Introduction
This is the fourth (and most probably last) part of a series of tutorials on C#. In this part, we will look at some more advanced techniques and features of the language. We will see that we can actually control what the compiler does when event handlers are added or removed, how we can bring in our own operator definitions or get information about member names. We will also see that building iterators is in fact quite easy with C#.
Additionally, we will strengthen our knowledge about how to write effective C#, the Garbage Collector, and attributes. Finally, we also discuss how to talk to native programs and libraries from C# via unsafe contexts and native interoperability. This tutorial should provide some useful tips and tricks for experienced C# developers.
For further reading, a list of references is given at the end. The references provide a deeper look at some of the topics discussed in this tutorial.
More Control on Events
In the previous tutorial, we discussed the .NET standard event pattern and the event keyword. We have noticed that, in principle, the event keyword protects a simple public delegate instance from being accessed directly. Instead, we only have the possibility to add (+=) or remove (-=) handlers from outside.
It has already been explained that the reason for this is that the compiler extends our class definition with two more methods, which will be called when adding or removing an event handler. Those methods then perform operations in a thread-safe manner, like calling the static Combine method of the Delegate class.
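To make this concrete, here is a hand-written pair of accessors that roughly mimics what the compiler generates. This is only a sketch; the real generated code differs in details, but it follows the same lock-free pattern based on Delegate.Combine / Delegate.Remove and Interlocked.CompareExchange:

```csharp
using System;
using System.Threading;

class MyClass
{
    EventHandler eventFired; // backing delegate field

    public event EventHandler EventFired
    {
        add
        {
            // Thread-safe add, similar to the compiler-generated accessor:
            // retry until no other thread changed the field in between.
            EventHandler current, combined;
            do
            {
                current = eventFired;
                combined = (EventHandler)Delegate.Combine(current, value);
            } while (Interlocked.CompareExchange(ref eventFired, combined, current) != current);
        }
        remove
        {
            EventHandler current, removed;
            do
            {
                current = eventFired;
                removed = (EventHandler)Delegate.Remove(current, value);
            } while (Interlocked.CompareExchange(ref eventFired, removed, current) != current);
        }
    }
}
```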
C# also allows us to define what these methods do. This can be of great benefit, since it allows us to create weak event handlers and other more sophisticated patterns. It also allows us to control which handlers can be registered (or unregistered) for our event(s). The basic syntax is quite close to properties with their get and set blocks. Let's see this in action:
class MyClass
{
    public event EventHandler EventFired
    {
        add
        {
            Console.WriteLine("Tried to register a handler.");
        }
        remove
        {
            Console.WriteLine("Tried to unregister a handler.");
        }
    }
}
We could now either use another instance of the delegate (in this case EventHandler) that is private and not marked as an event, or we could do some other (perhaps more special) action. One thing we could do is to have a single handler instead of multiple handlers:
class MyClass
{
    EventHandler singleHandler;

    public event EventHandler EventFired
    {
        add
        {
            singleHandler = value;
        }
        remove
        {
            if (singleHandler == value)
                singleHandler = null;
        }
    }
}
However, this pattern is only useful in a few exclusive scenarios. So let's go on and have a look at probably the most common usage of writing our own add and remove blocks:
Hashtable events = new Hashtable();

public event EventHandler Completed
{
    add
    {
        events["Completed"] = (EventHandler)events["Completed"] + value;
    }
    remove
    {
        events["Completed"] = (EventHandler)events["Completed"] - value;
    }
}
Here, we use the plus and minus operators, which are overloaded by Delegate to call Combine or Remove.
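Using such an event from the outside does not change at all; only raising it needs to read the delegate back out of the Hashtable. A small sketch (the containing Worker class and its RaiseCompleted helper are hypothetical):

```csharp
using System;
using System.Collections;

class Worker
{
    Hashtable events = new Hashtable();

    public event EventHandler Completed
    {
        add { events["Completed"] = (EventHandler)events["Completed"] + value; }
        remove { events["Completed"] = (EventHandler)events["Completed"] - value; }
    }

    // Hypothetical helper that fires the event
    public void RaiseCompleted()
    {
        var handler = (EventHandler)events["Completed"];
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// Usage:
// var w = new Worker();
// w.Completed += (s, e) => Console.WriteLine("Done!");
// w.RaiseCompleted();
```

This pattern pays off when a class exposes many events: instead of one delegate field per event, storage is only allocated for events that actually have subscribers.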
Overloading Operators
What we did not discuss yet in this series of tutorials is the possibility of implementing operators for our own objects. C# allows us to overload implicit cast operators, explicit cast operators, and a whole range of unary, arithmetic, and comparison operators (+, -, !, ~, ++, --, true, false, +, -, *, /, %, &, |, ^, <<, >>, ==, !=, <, >, <=, >=). Other operators either cannot be overloaded (like the assignment operator) or are implicitly overloaded (like += is overloaded if we overload +).
It is not possible to overload the index operator [], but we can create our own indexers:
class MyClass
{
    public int this[int index]
    {
        get { return index + 1; }
    }
}
var myc = new MyClass();
int value = myc[3];
Indexers are like properties; however, the differences are that they have a special name (this) and require one or more parameters to be specified (in the example, we define an integer parameter with the name index). This way, we can easily create our own multi-dimensional indexers:
class Matrix
{
    double[][] elements;

    public double this[int row, int column]
    {
        get
        {
            if (row < 0 || column < 0)
                throw new IndexOutOfRangeException();
            else if (row >= elements.Length)
                return 0.0;
            else if (column >= elements[row].Length)
                return 0.0;

            return elements[row][column];
        }
        set
        {
            if (row < 0 || column < 0 || row >= elements.Length ||
                column >= elements[row].Length)
                throw new IndexOutOfRangeException();

            elements[row][column] = value;
        }
    }
}
We can have an arbitrary amount of such indexers, as long as each one has a unique signature (different number or types of parameters).
To demonstrate how operator overloading works with the previously named operators, we will work with an example that creates a simple structure called Complex (this struct will represent a complex number in double precision):
public struct Complex
{
    double re;
    double im;

    public Complex(double real, double imaginary)
    {
        re = real;
        im = imaginary;
    }

    public double Re
    {
        get { return re; }
        set { re = value; }
    }

    public double Im
    {
        get { return im; }
        set { im = value; }
    }
}
The first thing we might want to implement is an implicit conversion from double to Complex. A double is like a real value, i.e., this conversion should give us a complex value that has imaginary part 0:
public static implicit operator Complex(double real)
{
    return new Complex(real, 0.0);
}
We will see that all operator definitions require the operator keyword. For overloading implicit conversions, the implicit keyword is additionally required. Needless to say, every operator overload has to be static.
There should also be an explicit conversion from Complex to double. This explicit conversion returns the absolute value (magnitude) of the complex number:
public static explicit operator double(Complex c)
{
    return Math.Sqrt(c.re * c.re + c.im * c.im);
}
Maybe we want to use our complex object for defining true (to be used in if (instance) ...) or false (which will be used in combination with overriding &). Let's see how this could be implemented:
public static bool operator true(Complex c)
{
    return Math.Abs(c.re) > double.Epsilon || Math.Abs(c.im) > double.Epsilon;
}

public static bool operator false(Complex c)
{
    return Math.Abs(c.re) <= double.Epsilon && Math.Abs(c.im) <= double.Epsilon;
}
On the other side, if we implement representations for true or false, we should also implement an (at least explicit) cast to bool:
public static explicit operator bool(Complex c)
{
    return Math.Abs(c.re) > double.Epsilon || Math.Abs(c.im) > double.Epsilon;
}
Finally, we should think about implementing the (required) arithmetic operators:
public static Complex operator +(Complex c1, Complex c2)
{
    return new Complex(c1.re + c2.re, c1.im + c2.im);
}

public static Complex operator -(Complex c1, Complex c2)
{
    return new Complex(c1.re - c2.re, c1.im - c2.im);
}

public static Complex operator *(Complex c1, Complex c2)
{
    return new Complex(c1.re * c2.re - c1.im * c2.im, c1.re * c2.im + c1.im * c2.re);
}

public static Complex operator /(Complex c1, Complex c2)
{
    // divide by the squared norm of the denominator
    double nrm = c2.re * c2.re + c2.im * c2.im;
    return new Complex((c1.re * c2.re + c1.im * c2.im) / nrm,
        (c2.re * c1.im - c2.im * c1.re) / nrm);
}
And finally, we also might want to implement some comparison operators. This follows the same pattern as above; however, as return type we now specify a Boolean:
public static bool operator ==(Complex c1, Complex c2)
{
    return Math.Abs(c1.re - c2.re) <= double.Epsilon &&
        Math.Abs(c1.im - c2.im) <= double.Epsilon;
}

public static bool operator !=(Complex c1, Complex c2)
{
    return !(c1 == c2);
}
If we implement these comparison operators, the C# compiler encourages us (with a warning) to also override the methods GetHashCode and Equals:
public override int GetHashCode()
{
    return base.GetHashCode();
}

public override bool Equals(object o)
{
    if (o is Complex)
        return this == (Complex)o;

    return false;
}
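Returning base.GetHashCode() compiles, but a hash that reflects our own fields would combine both components. Note this is only a sketch: because our == operator uses an epsilon tolerance, two values that compare equal can still produce different hashes, so a fully consistent implementation would need more care:

```csharp
public override int GetHashCode()
{
    // Combine the hash codes of the real and imaginary parts.
    // Caution: only a sketch; the epsilon-based == above means
    // "equal" values are not guaranteed to share a hash code.
    return re.GetHashCode() ^ (im.GetHashCode() << 1);
}
```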
One question that could arise at this point is: what about operators involving, e.g., double and Complex? Don't we need something like the following as well?
public static Complex operator +(Complex c, Double x)
{
    return new Complex(c.re + x, c.im);
}
The specific answer is: yes and no. In general, we might need something like this, but then we would also need the following operator overload to be defined:
public static Complex operator +(Double x, Complex c)
{
    return new Complex(c.re + x, c.im);
}
However, in our case, we specified an implicit (!) conversion from Double to Complex. This means that if a Complex type is needed where an object of type Double is given, a conversion is performed automatically (the compiler inserts the required instructions). Of course, sometimes it could be beneficial (performance-wise or logic-wise) to include such overloads of an operator explicitly.
Finally, we can use our own type as follows:
Complex c1 = 2.0;
Complex c2 = new Complex(0.0, 4.0);
Complex c3 = c1 * c2;
Complex c4 = c1 + c2;
Complex c5 = c1 / c2;
double x = (double)c5;
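Since += is derived from our overloaded + (and the implicit conversion handles plain doubles), compound assignments work automatically as well. A short sketch, reusing c2 from above:

```csharp
Complex sum = new Complex(1.0, 1.0);
sum += 2.0;  // implicit conversion of 2.0 to Complex, then our operator +
sum += c2;   // (3.0, 1.0) + (0.0, 4.0) gives (3.0, 5.0)
```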
So what should be the key lessons in this section?
- C# allows us to overload a broad range of operators; however, not as many (and not such critical ones like the assignment operator) as C++.
- Overloading (standard) operators requires us to implement static methods, which carry the operator keyword and are declared public.
- We cannot overload the index operator, but we can create as many indexers as we like. The indexers are differentiated by their signatures.
- Explicit conversions are performed like (double)c5; implicit conversions are automatically triggered by the compiler. However, implicit conversions can also be used explicitly, like (Complex)2.0.
The Yield Statement
In C#, it is quite common to use the foreach loop, even though there are some significant drawbacks compared to the classical for loop. One big drawback is the immutability of the loop variable, i.e., we cannot change the variable of the loop. So the following code is not possible:
var integers = new [] { 1, 2, 3, 4, 5, 6 };
foreach(var integer in integers)
integer = integer * integer;
This is because the foreach loop works only with objects that implement the IEnumerable<T> interface, like every array or collection naturally does. Once some class implements this interface, the method GetEnumerator can be called. This returns a special kind of object, which implements the IEnumerator<T> interface. Here, we have an object that has a read-only property called Current and the methods MoveNext and Reset.
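In fact, the foreach loop is just syntactic sugar over exactly these members. For a generic collection such as List<int>, the compiler rewrites the loop roughly as follows (arrays are special-cased via indexing, so this is a sketch of the general case):

```csharp
using System;
using System.Collections.Generic;

List<int> numbers = new List<int> { 1, 2, 3 };

// foreach (var number in numbers) { ... } is rewritten (roughly) as:
using (IEnumerator<int> enumerator = numbers.GetEnumerator())
{
    while (enumerator.MoveNext())
    {
        int number = enumerator.Current;
        Console.WriteLine(number);
    }
}
```

This also explains the immutability above: number is a local copy filled from Current, and Current has no setter.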
The first thing to try right now is to implement the IEnumerable<T> interface in our own class. The following example should demonstrate this:
class MyClass : IEnumerable<int>
{
    public IEnumerator<int> GetEnumerator()
    {
        return null;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return null;
    }
}
We could already use this in a foreach loop. The compiler would let this pass; however, at runtime we would face serious problems, since there is no check for null implied in the generated instructions. Now we have two possibilities:
- Create a class that implements IEnumerator<T> (with T being int in this case).
- Use a nice C# language feature.
Here we will actually do both, however, only to see why the second way is much nicer (shorter) than the first. Let's have a look at the code that is required for the first approach.
class MyClass : IEnumerable<int>
{
    public IEnumerator<int> GetEnumerator()
    {
        return new Squares();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return new Squares();
    }

    class Squares : IEnumerator<int>
    {
        private int number;
        private int current;

        public int Current
        {
            get { return current; }
        }

        object IEnumerator.Current
        {
            get { return current; }
        }

        public bool MoveNext()
        {
            number++;
            current = number * number;
            return true;
        }

        public void Reset()
        {
            number = 0;
            current = 0;
        }

        public void Dispose()
        { }
    }
}
Puuuh! Now this is quite long and required us to write a whole class! Before we go into details of the second way, we have a short look at using this code:
var squares = new MyClass();

foreach (var square in squares)
{
    Console.WriteLine(square);

    if (square == 100)
        break;
}
Now that we are quite excited about a much shorter way, let's have a look at it. It involves the (not yet discussed) yield keyword:
class MyClass : IEnumerable<int>
{
    public IEnumerator<int> GetEnumerator()
    {
        for (int i = 1; ; i++)
            yield return i * i;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
Now this is quite a reduction in lines of code! Of course, this example will usually end up in an infinite loop. Maybe this is not wanted. The key question now is: can we break the loop at a specific point? There are multiple ways of doing this. One way would be to just limit the for loop in the example like this:
public IEnumerator<int> GetEnumerator()
{
    for (int i = 1; i < 10; i++)
        yield return i * i;
}
However, sometimes code is a little bit longer and quite complicated. In such scenarios, we would like to return something which stops the iteration. Again the yield keyword is key; however, this time in combination with the break keyword:
public IEnumerator<int> GetEnumerator()
{
    for (int i = 1; ; i++)
    {
        var square = i * i;

        if (square > 100)
            yield break;

        yield return square;
    }
}
This lets us use the code like in the following snippet:
var squares = new MyClass();
foreach(var square in squares)
Console.WriteLine(square);
Finally, we have a nice way to create iterators in C#! But what are key benefits of using iterators and what has to be known before using them?
Iterators
An iterator is an object that enables us to traverse a container. We've just seen how easily we can create an iterator. The previous example should also have demonstrated that traversing a container could mean traversing a list of items in a collection (or an array), but it could also be something quite different. In the previous example, our container did not contain any elements, but was able to generate the elements that were handed to the iterator.
The yield keyword and the iterator concept in C# have been inspired by the programming language CLU.
The most important characteristic of any such object definition is that it implements the IEnumerable (or even better: the IEnumerable<T>) interface. This requires a method GetEnumerator to be implemented and enables performing foreach loops. There are even other benefits: since (P)LINQ is built upon the foreach loop (all entry extension methods target IEnumerable<T>), we can write queries against such iterators.
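For example, the infinite squares iterator from the previous section can be consumed lazily with LINQ; Take stops the otherwise endless enumeration:

```csharp
using System;
using System.Linq;

var squares = new MyClass(); // the yield-based iterator from above
var firstEvenSquares = squares.Take(10).Where(s => s % 2 == 0);

foreach (var s in firstEvenSquares)
    Console.WriteLine(s); // 4, 16, 36, 64, 100
```

Because everything is deferred, no square is computed until the foreach actually pulls it.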
What is this section about? As we've seen in the last section, it is quite easy to build our own iterator in C#. We have also seen that foreach uses this iterator concept to traverse the given container. However, even without using foreach, we can benefit from this concept.
Let's consider the following lines of code as an example:
static void Main()
{
    var myc = new MyClass();
    var it = myc.GetEnumerator();
    Examine(it);
}

static void Examine(IEnumerator<int> iterator)
{
    while (iterator.MoveNext())
    {
        if ((iterator.Current & 1) == 0)
            Even(iterator);
        else
            Odd(iterator);
    }

    Console.WriteLine("No more elements left!");
}

static void Even(IEnumerator<int> iterator)
{
    Console.WriteLine("The element " + iterator.Current + " is even ...");
}

static void Odd(IEnumerator<int> iterator)
{
    Console.WriteLine("The element " + iterator.Current + " is odd ...");
}
Of course, this simple code could also be written using the foreach loop; however, the example should show that we can actually build up a whole collection of methods which, in the end, just take an iterator and work with it. Therefore, this code could be extended to the following version:
static void Examine(IEnumerator<int> iterator)
{
    List<int> entries = new List<int>();

    while (iterator.MoveNext())
    {
        entries.Add(iterator.Current * iterator.Current);

        if ((iterator.Current & 1) == 0)
            Even(iterator);
        else
            Odd(iterator);
    }

    var next = entries.GetEnumerator();
    Console.WriteLine("No more elements left!");

    if (entries[entries.Count - 1] < 1000000000)
        Examine(next);
}
Here, we are using our method again - this time with an iterator from a list that has been generated from the squares of the given elements. Calling Examine again will result in numbers to the power of 4, then to the power of 8, and so on. This means that we can expect some output like the following:
The element 1 is odd ...
The element 4 is even ...
The element 100 is even ...
No more elements left!
The element 1 is odd ...
The element 16 is even ...
The element 10000 is even ...
No more elements left!
The element 1 is odd ...
The element 256 is even ...
The element 100000000 is even ...
No more elements left!
Iterators are really useful when we have to do a (forward-) examination of a container. The following diagram illustrates the iterator pattern again. It should be noted that nothing stops us from creating a loop here, such that the iterator never reaches an end (MoveNext will always return true in such a case).
The only thing one needs to be aware of is that IEnumerable is an interface. Interfaces can be implemented in classes and structures, and if we receive a structure, then passing the iterator around won't work as probably expected (since without our own iterators we always received classes, which are passed as references).
Let's see an example of such a problem:
static void Main()
{
    List<int> squares = new List<int>(new int[] { 1, 4, 9, 16, 25, 36, 49, 64, 81, 100 });
    var it = squares.GetEnumerator();
    ConsumeFirstFive(it);
    Console.WriteLine("The 6th entry is ... " + it.Current);
}

static void ConsumeFirstFive(IEnumerator<int> iterator)
{
    int consumed = 0;

    while (iterator.MoveNext() && consumed < 5)
        consumed++;
}
The expected outcome is 36; however, the real outcome is 0. This means the iterator has not been moved / touched yet. How can that be? We used var to hide the real type of the iterator, which happens to be List<int>.Enumerator, the nested struct Enumerator of List<int>. When this struct is passed as an IEnumerator<int>, a boxed copy is advanced - not our local variable.
Apparently, somebody believed that implementing an iterator in a struct might have some advantages. Indeed this is true; however, since the usually generated IEnumerator<T> implementation is a class, the confusion might be a problem at this point.
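One way around the pitfall (a sketch) is to take the enumerator via a generic type parameter and by ref; the interface methods are then called on the original struct instead of a boxed copy:

```csharp
using System;
using System.Collections.Generic;

static void ConsumeFirstFive<TEnumerator>(ref TEnumerator iterator)
    where TEnumerator : IEnumerator<int>
{
    int consumed = 0;

    // The constrained call avoids boxing, so the caller's struct is mutated.
    while (iterator.MoveNext() && consumed < 5)
        consumed++;
}

// Usage: the struct enumerator is now advanced in place:
// var it = squares.GetEnumerator();
// ConsumeFirstFive(ref it);
// Console.WriteLine("The 6th entry is ... " + it.Current); // now prints 36
```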
Understanding Co- and Contra-Variance
Generics in C# are really great, but without co- and contra-variance, they did not shine so bright all the time. The following code should illustrate the issue:
abstract class Vehicle
{
    public bool IsParked { get; set; }
}

class Car : Vehicle { }

class Truck : Vehicle { }

static void ParkVehicles(IEnumerable<Vehicle> vehicles)
{
}

static void Main()
{
    List<Car> carpool = new List<Car>();
    carpool.Add(new Car());
    carpool.Add(new Car());
    ParkVehicles(carpool);
}
Now from our point of view, it is obvious that a list of cars is a list of (more specialized) vehicles. So an IEnumerable<Car> could be used as an IEnumerable<Vehicle>. However, the same argument could then be made about List<Car>. Now we see that this might lead to a problem, since if we could actually use the List<Car> as a List<Vehicle>, then we could also store Truck instances in it. Therefore, we can say that what we actually want is:
- We want to treat List<Car> as List<Vehicle> for reading.
- We do not want to treat List<Car> as List<Vehicle> for writing.
So what does that mean? In our case, what we want is a co-variant treatment of the T in IEnumerable<T>. Something is called co-variant if it preserves the ordering of types, i.e., from more specific to more generic. We have already seen co-variance from the first moment on. Remember the following:
object str = "I am a string stored in a variable of type object!";
In contrast, we talk about contra-variance if the ordering is reversed, i.e., types are ordered from more generic to more specific ones. This does not happen in the type system; the following line does not compile:
string o = new object();
However, there are some cases where the latter is quite useful. Remember delegates? Usually, we want to put the least specific type on the input parameters (i.e., if a Vehicle contains all properties we need, why should we constrain the parameter to Car?) and the most specific type on the return parameter (why should we return Object when the type is actually a Car?).
Therefore, input parameters are a perfect match for contra-variant constraints (specific to general), output parameters are a perfect match for co-variant constraints (general to specific). If the type has to stay the same (not moving up or down in the hierarchy), then it is called invariant.
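With the Vehicle and Car classes from above, delegate variance makes this tangible (both assignments compile since C# 4):

```csharp
using System;

// Contra-variant input: a handler for any Vehicle can also handle a Car.
Action<Vehicle> parkAny = v => v.IsParked = true;
Action<Car> parkCar = parkAny;       // specific to general on the input side

// Co-variant output: a factory producing Cars also produces Vehicles.
Func<Car> makeCar = () => new Car();
Func<Vehicle> makeVehicle = makeCar; // general to specific on the output side
```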
The solution in the case of IEnumerable was to do the following:
public interface IEnumerable<out T>
{
}
This was not done with List<T>, since we also have write operations inside. Of course, the given delegates are also co- and contra-variant (since C# 4):
public delegate TReturn Func<in T1, in T2, ..., out TReturn>(T1 arg1, T2 arg2, ...);
The key questions now are:
- Where is this built-in? Which interfaces have been updated since .NET 3.5?
- When to use co- and contra-variance?
The first question has a simple (and yet deep) answer: all interfaces where either co- or contra-variance is useful have been updated. IEnumerable<T> has been updated (co-variant, i.e., IEnumerable<out T>), since data is only received. On the other hand, types like List<T> have been kept invariant, since data is about to be changed inside. We also have some contra-variant interfaces; for instance, IComparer<T> has been changed to IComparer<in T>. Here data is only sent, making it a perfect fit for input-parameter-like behavior.
This already answers part of the second question: we should use this concept for generics (of course!) when we have an interface (we should not use it on full classes due to maintainability and possible problems) that consists only of functions that either only return (co-variance) parameters of a certain type, or only receive (contra-variance) parameters of a certain type.
This statement can be extended to generic functions and delegates as well. Once we construct a type or signature of a function that uses these type arguments only as input or (this or is exclusive!) output, we benefit (in the long run) by decorating the type argument as co- or contra-variant.
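A minimal sketch of such a declaration pair could look like this (the interface names IProducer and IConsumer are hypothetical, not part of the framework):

```csharp
// Only returns T: safe to mark co-variant.
interface IProducer<out T>
{
    T Produce();
}

// Only receives T: safe to mark contra-variant.
interface IConsumer<in T>
{
    void Consume(T item);
}

// With Car : Vehicle, the following assignments then compile:
// IProducer<Vehicle> producer = someProducerOfCars;     // out T
// IConsumer<Car> consumer = someConsumerOfVehicles;     // in T
```

If an interface both receives and returns T, the compiler rejects the out/in modifier - which is exactly the List<T> situation described above.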
Using Attributes Effectively
Another novel feature of C#, which is actually part of the Common Language Runtime, is the notion of attributes. Attributes are meta-data about code: communications from the programmer to the compiler and run-time system. In C, such things are often done with pragmas and non-standard keywords; in Java, marker interfaces are used. Attributes are much richer than any similar capability in comparable languages. They also add a very complex facet to the language, one which many developers will not need. Fortunately, it is possible to write many C# programs without using attributes at all.
Let's recap what attributes we have already seen: when we introduced enumerations in C#, we had a look at the [Flags] attribute, which marks an enumeration as being a bit-flag. Consequently, this enables a more optimized string generation and gives other programmers a hint that combinations of values are supported or even expected.
Another very useful attribute is [Serializable]. This attribute tells the compiler that instances of the given class or struct are allowed to be serialized into bytes. We will not go into details here.
Instead, we will focus on a declarative set of attributes. In Windows Forms development, one will (sooner or later) start building custom controls. We already briefly touched on this topic. A question that has not been answered in the section which introduced the concept of creating custom controls was: "How does the integrated designer know the category or default value of a property?". When we have a close look at the property dialog, we will see that by default, it is ordered by the ascending names of the properties. However, the usual preference is to order the entries in this dialog not only by name, but also by the different categories given by the properties of the control.
The answer of this riddle seems to be in strong relation with attributes (or why is it being answered in this section?). Let's see what kind of attributes are available there by looking at some properties:
public class MyControl : Control
{
    [Category("Universe")]
    [Description("The answer to life, universe and everything.")]
    [DefaultValue(42)]
    public int AnswerToEverything
    {
        get;
        set;
    }
}
The given attributes are defined in the namespace System.ComponentModel. This set of attributes allows us to do most of the specification work when defining properties in our own controls. Additionally, DefaultProperty (if the given property should be selected by default), DefaultEvent (if the given event should be selected by default), DisplayName (the name that is displayed by the designer), and Localizable (whether the property should be localized) are very useful attributes in some scenarios. The following image shows the result as it is shown by the integrated designer.
Before we can go on with a more complicated example, we need to see what attributes actually are and how we can use them. Attributes are a type of meta-data that will be accessed by the compiler (special attributes) or can be accessed during runtime over reflection (there it is again!). These attributes are quite tricky: they are instance independent and therefore only type dependent.
Every attribute has to inherit from Attribute. This is very similar to Exception. However, compared to Exception objects, Attribute objects have some restrictions. In their prime usage (as attributes), they can only have instance-independent constructor parameters. Therefore, we cannot pass in delegates or instance-dependent references. The reason is that the attribute instance is created at compile-time, without a running application. So any reference to an application / instance dependent variable would not be resolvable.
Having that said, let's create a very simple attribute:
[AttributeUsage(AttributeTargets.Class)]
public class LinkAttribute : Attribute
{
    public LinkAttribute(string url)
    {
        Url = url;
    }

    public string Url { get; private set; }
}
The interesting thing here is that we use an attribute to mark this attribute. Here, we use the AttributeUsage attribute to mark our Link attribute as only applicable to classes. Such a statement is not required; however, it is quite useful from time to time.
Using this attribute works as expected:
[Link("http://www.google.com")]
public class Google : Webpage
{
}
As we can see, the constructor is now implicitly called. We also see that the word Attribute has been removed. This is a convention and not required; we could also refer to the attribute by using LinkAttribute as the name instead of just Link. Since our own attributes do not represent meaningful content for the compiler, we need to read them out manually. It has already been said that attributes and reflection belong together, so we know that the way to read those attributes is by using reflection:
public string GetLink(Webpage page)
{
    var attrs = page.GetType().GetCustomAttributes(typeof(LinkAttribute), false);

    if (attrs.Length == 0)
        return string.Empty;

    return ((LinkAttribute)attrs[0]).Url;
}
Here, we should note that we can explicitly state whether our attribute is allowed to occur multiple times or not. By default, an attribute can occur at most once per target. Also, attributes are never required, so there is always the possibility of having no attribute at all. These cases need to be covered.
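If multiple links per class should be allowed, this has to be stated explicitly via AllowMultiple, and the reading code has to loop over all matches. A sketch, reusing the LinkAttribute and Webpage types from above:

```csharp
using System;

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class LinkAttribute : Attribute
{
    public LinkAttribute(string url) { Url = url; }

    public string Url { get; private set; }
}

public static string[] GetLinks(Webpage page)
{
    // GetCustomAttributes returns every matching attribute instance
    var attrs = page.GetType().GetCustomAttributes(typeof(LinkAttribute), false);
    var urls = new string[attrs.Length];

    for (int i = 0; i < attrs.Length; i++)
        urls[i] = ((LinkAttribute)attrs[i]).Url;

    return urls;
}
```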
Elegant Binding
Modern UI applications make heavy use of binding possibilities. Binding is a way of coupling one value to another. In case of a graphical user interface, we want to make a connection between the value that is displayed on the screen and the real value in the right spot. In general, this problem is not solvable at the machine level: there is no CPU instruction that is triggered on access of a certain address.
Therefore, this problem is solved in the programming language. In C/C++, getter and setter methods have been introduced (by programmers). In C#, the concept of properties is offered to the programmer. We already discussed the advantages of this approach. Now we still have two problems:
- While reading the value could also go directly over the field (in the corresponding class), writing accesses should always go over the property. Otherwise logic or update routines will not be executed.
- Just having a property does not solve the problem. The property does also have to trigger some update logic or call another method.
While the first problem is just in our hands (always remember to change UI dependent values only over the property / in a way that the UI knows that the value has been changed), the second problem is not our problem anymore. Why that?
There are a lot of (very good) frameworks and libraries out there that manage this binding quite nicely. We will now look at the built-in binding capabilities of the WPF UI framework, which is (in a way) the successor of the Windows Forms UI framework. WPF builds upon an interface called INotifyPropertyChanged. If we implement this interface in our class, we need to implement an event called PropertyChanged. Usually, it looks similar to this:
class MyData : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    void RaisePropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
If we now want to use this in combination with a property, we would write code like this:
int answer;

public int Answer
{
    get { return answer; }
    set
    {
        answer = value;
        RaisePropertyChanged("Answer");
    }
}
The WPF framework takes care of the rest. It is possible to bind instances of our own data classes to the UI. WPF will display the values and allow the user to change them. Each change will result in the property being called. Finally, the UI is updated when (bound) properties are raising their changed events.
The problem with this approach is that we always have to write the name of the property twice: once as an identifier and a second time as a string. The problem is not the additional typing, but the (huge) possibility of mistyping the name. Since this is just a string, we do not have a compiler check whether the given string is a valid property, or the property that we are actually referring to. While the first kind of problem could be checked with automatic tests, the second one is much harder to detect.
A very good solution to this problem is given by the following code snippet:
protected void RaisePropertyChanged([CallerMemberName] string propertyName = null)
{
    if (PropertyChanged != null)
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
Here, we are using a special kind of attribute, which tells the compiler to insert the (member) name of the caller (i.e., the name of the property) if no argument has been given. So our property from before could be changed to:
public int Answer
{
get { return answer; }
set
{
answer = value;
RaisePropertyChanged();
}
}
This version is shorter and less error-prone. There is, however, also a second approach (which does not require attributes). While the attribute-based version is short, fast, and robust, it is not very flexible and quite limited. For instance, we could not pass in the name of a different property in a robust way; here, we would still have to fall back to the `string` version, which is not very robust. Another limitation is that quite some work would be required to read out the actual property value. Both of these problems are attacked by using the `Expression` class.
Let's see an example first:
public static string GetPropertyName<TClass, TProperty>
(Expression<Func<TClass, TProperty>> expression)
{
MemberExpression memberExp;
if (TryFindMemberExpression(expression.Body, out memberExp))
{
var memberNames = new Stack<string>();
do
{
memberNames.Push(memberExp.Member.Name);
} while (TryFindMemberExpression(memberExp.Expression, out memberExp));
return string.Join(".", memberNames.ToArray());
}
return string.Empty;
}
static bool TryFindMemberExpression(Expression exp, out MemberExpression memberExp)
{
memberExp = exp as MemberExpression;
if (memberExp != null)
return true;
if (IsConversion(exp) && exp is UnaryExpression)
{
memberExp = ((UnaryExpression)exp).Operand as MemberExpression;
if (memberExp != null)
return true;
}
return false;
}
static bool IsConversion(Expression exp)
{
return (exp.NodeType == ExpressionType.Convert ||
exp.NodeType == ExpressionType.ConvertChecked);
}
This snippet can now handle code like:
public int Answer
{
get { return answer; }
set
{
answer = value;
RaisePropertyChanged((MyData m) => m.Answer);
}
}
with `RaisePropertyChanged` being changed to:
void RaisePropertyChanged<TClass, TProperty>(Expression<Func<TClass, TProperty>> expression)
{
if (PropertyChanged != null)
{
var propertyName = GetPropertyName(expression);
PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
}
The big advantage of this construct is that we can still get the value with no problem. All we have to do is evaluate the given method:
void RaisePropertyChanged<TClass, TProperty>
(TClass source, Expression<Func<TClass, TProperty>> expression) where TClass : MyData
{
if (PropertyChanged != null)
{
string propertyName = GetPropertyName(expression);
var f = expression.Compile();
// the current value is now available as well
TProperty propertyValue = f(source);
PropertyChanged(source, new PropertyChangedEventArgs(propertyName));
}
}
Here, we combine several advantages and obtain an extensive solution. To conclude this section: if we only need to know the name of the calling property, we can use the compiler attribute; otherwise we might want to use the `Expression`-based variant.
Of course, if we are not interested in the property value, then the previously described pattern is not interesting at all. More interesting would be an improved version of the originally presented `RaisePropertyChanged` implementation. Here, the following works quite well as an intermediate layer:
protected bool SetProperty<T>
(ref T field, T value, [CallerMemberName] string propertyName = null)
{
if (!object.Equals(field, value))
{
field = value;
RaisePropertyChanged(propertyName);
return true;
}
return false;
}
This helper can now be used in code like:
public int Answer
{
get { return answer; }
set { SetProperty(ref answer, value); }
}
As a general remark: with C# 6, we should use the `nameof` operator whenever we need to supply a `string` corresponding to the name of a type, property, or method (among others).
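As an illustration, the setter from before could use `nameof` like this (a minimal sketch); the name is then checked by the compiler and survives renames:

```csharp
using System.ComponentModel;

class MyData : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    int answer;
    public int Answer
    {
        get { return answer; }
        set
        {
            answer = value;
            // nameof(Answer) is resolved at compile time; a typo will not compile
            RaisePropertyChanged(nameof(Answer));
        }
    }

    void RaisePropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
```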
Unsafe Code
C# also has the possibility to access memory directly in a native-like manner. Consider the following C code, where `byte` is a typedef for `char` (1 byte):
int main() {
int a = 35181;
byte* b = (byte*)&a;
byte b1 = *b;
byte b2 = *(b + 1);
byte b3 = *(b + 2);
byte b4 = *(b + 3);
return 0;
}
This code allows us to split the 4-byte integer type into its components. Nicely enough, this is also perfectly valid C# code!
unsafe static void Main()
{
int a = 35181;
byte* b = (byte*)&a;
byte b1 = *b;
byte b2 = *(b + 1);
byte b3 = *(b + 2);
byte b4 = *(b + 3);
}
Unfortunately (or fortunately as we will see later), this does not work out of the box, since the default compilation option excludes unsafe contexts. We have to enable the "Allow unsafe code" option in the corresponding project's properties. The corresponding dialog is shown in the following image:
There are some reasons for requiring this option:
- Some platforms run C# only in managed mode, forbidding any unsafe compilation. Hence, code written with `unsafe` blocks will not run there.
- Objects are generally managed, which means they can be repositioned in memory. If we hold (fixed) pointers to their addresses, we might end up in invalid memory (segmentation fault), since these objects might have been repositioned without us knowing about it.
- Unsafe code cannot be optimized like managed code. This is not as bad as inline assembly in C (from an optimization point of view), but it can have negative performance implications.
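If we only need the bytes and want to stay fully managed, `BitConverter` can do the same decomposition without any `unsafe` block; a sketch (the byte order shown assumes a little-endian machine):

```csharp
using System;

static class ByteSplit
{
    static void Main()
    {
        int a = 35181; // 0x0000896D
        byte[] b = BitConverter.GetBytes(a);
        // on a little-endian system this prints: 109, 137, 0, 0
        Console.WriteLine(string.Join(", ", b));
    }
}
```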
However, unsafe code might also bring the cure to some performance problems. In unsafe blocks, we can iterate over an array without using the C# index operator. This is good, since the C# index operator causes some overhead by checking that the index is within the bounds of the array. In some cases, this overhead might be a huge issue, e.g., when analyzing a bitmap.
Before we discuss this example of an effective usage of the `unsafe` keyword, let's discuss some other properties (or features) of `unsafe` scopes. We can apply the `unsafe` keyword to whole type definitions (like a `class`), to single methods, or to plain blocks:
unsafe class MyClass
{
}
class MyOtherClass
{
unsafe void PerformanceCriticalMethod()
{
}
}
class MyThirdClass
{
void ImportantMethod()
{
unsafe
{
}
}
}
One pitfall of `unsafe` code that has already been mentioned is fixed (pun intended) by using the `fixed` keyword.
static int x;
unsafe static void F(int* p)
{
*p = 1;
}
static void Main()
{
int[] a = new int[10];
unsafe
{
fixed (int* p = &x) F(p);
fixed (int* p = &a[0]) F(p);
fixed (int* p = a) F(p);
}
}
A `fixed` statement is used to pin a variable so that its address can be passed to a method that takes a pointer. A very convenient use of this (or `unsafe` blocks in general) is iterating over an array using pointer arithmetic. We could, e.g., use this to fill all entries of a multi-dimensional array like in the following example:
static void Main()
{
int[,,] a = new int[2,3,4];
unsafe
{
fixed (int* p = a)
{
for (int i = 0; i < a.Length; ++i)
p[i] = i;
}
}
}
One last thing (before we discuss the example of accessing bitmap data efficiently) that is interesting about such `unsafe` code is the possibility of replacing `new` with `stackalloc`. The problem is the following: what if we want to allocate memory from the call stack to be used by an array of an elementary (or unmanaged) type (like `char`, `int`, `double`, ...)? Normally, every array is placed on the heap, where it is managed by the garbage collector (there are ways around this, which will be discussed in the next section).
char* buffer = stackalloc char[16];
The `stackalloc` keyword can only be used within `unsafe` contexts. The allocated memory is automatically discarded when the scope is left. Basically, the keyword is equivalent to `new` (from our perspective), with the difference that the clean-up is performed right after the scope ends, possibly resulting in less garbage collector pressure and better performance.
This corresponds to the `alloca` function, an extension commonly found in C and C++ implementations.
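A minimal sketch of `stackalloc` in action (the method name is my own; this requires compiling with unsafe code enabled): a temporary buffer is created on the stack and vanishes when the method returns:

```csharp
static class StackAllocDemo
{
    public unsafe static int SumOfSquares(int n)
    {
        // scratch buffer on the call stack; no GC involvement at all
        int* squares = stackalloc int[n];
        for (int i = 0; i < n; ++i)
            squares[i] = i * i;

        int sum = 0;
        for (int i = 0; i < n; ++i)
            sum += squares[i];
        return sum; // the buffer is discarded when the method returns
    }
}
```

For `n = 4` this yields 0 + 1 + 4 + 9 = 14, without a single heap allocation.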
Coming back to the promised example of getting a performance boost while manipulating images using an unsafe block. The problem is that the simplest access in C# to bitmap data (in the `Bitmap` class) is given by iterating over a 2D array via `GetPixel`. Each value is then a structure with multiple values for the colors. The whole situation naturally depends on the bitmap itself (e.g., how many bytes per color).
var path = @"1680x1050_1mb.jpg";
var bmp = (Bitmap)Bitmap.FromFile(path);
var sw = Stopwatch.StartNew();
for (var i = 0; i < bmp.Width; i++)
{
for (var j = 0; j < bmp.Height; j++)
{
bmp.GetPixel(i, j);
}
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
This (pointless) code takes about 2550 ms on my machine. Let's use some unsafe code to speed it up!
var path = @"1680x1050_1mb.jpg";
var bmp = (Bitmap)Bitmap.FromFile(path);
var data = bmp.LockBits(new Rectangle(Point.Empty, bmp.Size),
ImageLockMode.ReadOnly, bmp.PixelFormat);
var bpp = data.Stride / data.Width;
var sw = Stopwatch.StartNew();
unsafe
{
byte* scan0 = (byte*)data.Scan0.ToPointer();
byte* scan = scan0;
for (var i = 0; i < bmp.Width; i++)
{
for (var j = 0; j < bmp.Height; j++)
scan += bpp;
}
}
sw.Stop();
bmp.UnlockBits(data);
Console.WriteLine(sw.ElapsedMilliseconds);
On my machine, this code runs in 335ms, or about 8 times faster. One of the biggest drawbacks is that we still have 2 loops. Of course, this could be condensed into one loop, however, we now want to be even faster without going into unmanaged code.
How can we achieve this? The magic word is: interoperability (or interop for short). COM interop is a technology included in the .NET CLR that enables COM objects to interact with .NET objects, and vice versa. We can use this marshaling infrastructure to let native code do some array copying. Afterwards, we access the whole byte data of the image in a linear array.
var path = @"1680x1050_1mb.jpg";
var bmp = (Bitmap)Bitmap.FromFile(path);
var data = bmp.LockBits(new Rectangle(Point.Empty, bmp.Size),
ImageLockMode.ReadOnly, bmp.PixelFormat);
var bpp = data.Stride / data.Width;
var bytes = data.Stride * bmp.Height;
byte value;
var sw = Stopwatch.StartNew();
var ptr = data.Scan0;
var values = new byte[bytes];
Marshal.Copy(ptr, values, 0, bytes);
for (var i = 0; i < bytes; i += bpp)
value = values[i];
sw.Stop();
bmp.UnlockBits(data);
Console.WriteLine(sw.ElapsedMilliseconds);
Here, we start copying at the address specified by the `IntPtr` instance. As destination, we hand over the byte array called `values`, with an offset of 0. In total, we want to receive `bytes` bytes.
This code runs in just 14 ms, which is about 20 times as fast as before and 160 times as fast as the first version. Regardless of this performance gain, we should note that no actual work has been done in any of those three scenarios, so both the second and the third version have their good and bad sides.
The problem with the third one is that changes on the image have to be transferred back to the original data source. So here, we have a lot of memory transfers going on, with duplicated memory along the way.
The fastest solution is to use the `unsafe` variant with caching of the properties. The little tweak is to use the following code inside the `unsafe` block:
byte* scan0 = (byte*)data.Scan0.ToPointer();
byte* scan = scan0;
var width = bmp.Width;
var height = bmp.Height;
for (var i = 0; i < width; i++)
{
for (var j = 0; j < height; j++)
scan += bpp;
}
Now we are just above 2 ms, which is close to another magnitude in speed and nearly three magnitudes faster than the original version. It turns out that the real performance optimization was something that was always lying right in front of us: caching C# property accesses to avoid (redundant) expensive computations.
Communication Between Native and Managed Code
We already touched the topic of COM interop at the end of the last section. In this section, we want to see how we can actually use it and when COM interop becomes quite useful. We will not create our own COM interop components or interfaces.
COM interop is one possible way to access functions of native applications from C#. Therefore, it is also the way to access the Win32 API from C#. Since the Windows kernel is mostly written in pure C, the API is also mostly C-like. This means we have a big number of parameters, with return values sometimes also given in the form of (to speak some C# here: `out`) parameters.
Now before we turn to code, we have to realize some truths:
- Many API calls are based on constant values (from enumerations). The problem is that C does not support encapsulation of enumeration values as C# does, so we have to know the exact values of the constants. This means we need to rewrite the enumeration (at least partially) in C# (sometimes knowing a single constant is sufficient; however, most of the time we are interested in multiple values).
- C also does not support a `delegate` type. Nevertheless, sometimes we have to hand in callback functions. These functions require a certain signature that is not checked as explicitly as with a delegate in C#. Needless to say, we should pass in correct signatures, which means we also have to build the required delegates (in addition to the enumerations).
- Some API calls make heavy use of structures. In C, the only way of encapsulating data is given by forming structures. From the two points above, it is quite obvious that we have to rebuild these `struct` types in C#.
The last point is actually quite interesting. While a `delegate` is just a way of ensuring a certain signature and a constant is just a number, a `struct` is an actual object with a memory address. This object might be moved by the garbage collector or (more likely and worse) be laid out differently than in C. If we construct the following `struct` in C (code shown in C# syntax):
struct Example
{
public int a;
public double b;
public byte c;
}
we know that its fields occupy 13 bytes of data: the first 4 bytes represent an integer, the next 8 a double, and the last one a single byte (a C compiler may insert alignment padding, but the field order is fixed). This order is guaranteed in C; in C#, however, it is not. Needless to say, we require the guaranteed order for communicating with an API that has been written in C.
There is a neat trick to achieve this in C#. Actually it is not a trick, it is just an attribute to tell the compiler to preserve the ordering.
[StructLayout(LayoutKind.Sequential)]
struct Example
{
public int a;
public double b;
public byte c;
}
According to the official specification, this attribute is applied automatically to structures, at least when they are used for interop. Nevertheless, the recommendation is to use the attribute explicitly, just to ensure that everything works as expected. There are other ways to use the `StructLayout` attribute. Another popular way is to specify the offset of each field by hand:
[StructLayout(LayoutKind.Explicit, Pack=1, CharSet=CharSet.Unicode)]
struct Example
{
[FieldOffset(0)]
public int a;
[FieldOffset(4)]
public double b;
[FieldOffset(12)]
public byte c;
}
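We can verify the effect of such layout attributes with `Marshal.SizeOf`; a sketch (the type name is made up): with `Pack=1` the three fields occupy exactly 4 + 8 + 1 = 13 bytes, while the default layout would insert alignment padding:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct PackedExample
{
    public int a;    // offset 0,  4 bytes
    public double b; // offset 4,  8 bytes
    public byte c;   // offset 12, 1 byte
}

static class LayoutCheck
{
    static void Main()
    {
        // with Pack = 1 there is no padding between the fields
        Console.WriteLine(Marshal.SizeOf(typeof(PackedExample))); // 13
    }
}
```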
This could also be used to produce union-like structures in C#:
[StructLayout(LayoutKind.Explicit, Pack=1)]
struct Number
{
[FieldOffset(0)]
public int Value;
[FieldOffset(0)]
public byte First;
[FieldOffset(1)]
public byte Second;
[FieldOffset(2)]
public byte Third;
[FieldOffset(3)]
public byte Fourth;
}
var num = new Number();
num.Value = 482912781;
A union is no new concept. In fact, unions are pretty common in C. The following picture demonstrates what's going on in the memory:
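In code, reading the overlapping fields back makes this overlay visible; a sketch assuming a little-endian machine (482912781 is 0x1CC8AA0D):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct Number
{
    [FieldOffset(0)] public int Value;
    [FieldOffset(0)] public byte First;
    [FieldOffset(1)] public byte Second;
    [FieldOffset(2)] public byte Third;
    [FieldOffset(3)] public byte Fourth;
}

static class UnionDemo
{
    static void Main()
    {
        var num = new Number();
        num.Value = 482912781; // 0x1CC8AA0D
        // little-endian: least significant byte first
        // First = 0x0D (13), Second = 0xAA (170), Third = 0xC8 (200), Fourth = 0x1C (28)
        Console.WriteLine($"{num.First} {num.Second} {num.Third} {num.Fourth}");
    }
}
```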
Now that we have an impression on how to construct parameters for interop calls, we should look at an actual interop call: We want to access the native GDI API for reading out a certain value (color) of a pixel on the screen.
[DllImport("gdi32.dll")]
static extern uint GetPixel(IntPtr handle, int x, int y);
That's all! If we now call the `GetPixel` method, the system will look for the file gdi32.dll. It starts in the current directory and continues along the configured search paths. Finally, it will be found in the System32 directory of Windows itself. Once the DLL has been loaded, it does not have to be loaded again.
Everything here seems clear, but wait: what is the handle of the device we want to get the pixel from? If it is the screen, how do we get its handle? Every native handle is packed in a managed `IntPtr` instance. We know that getting the handle of any `Control` is possible by using the `Handle` property; however, we do not have a "screen" control.
Two more methods have to be imported to get this handle:
[DllImport("gdi32.dll")]
static extern IntPtr CreateDC(string driver, string device, string output, IntPtr data);
[DllImport("gdi32.dll")]
static extern bool DeleteDC(IntPtr handle);
Now we can write the desired function to get the color of an arbitrary pixel on the screen.
public Color GetScreenPixel(int x, int y)
{
var screen = CreateDC("Display", null, null, IntPtr.Zero);
var value = GetPixel(screen, x, y);
var pixel = Color.FromArgb((int)(value & 0xFF),
(int)((value & 0xFF00) >> 8), (int)((value & 0xFF0000) >> 16));
DeleteDC(screen);
return pixel;
}
The conversion of the integer is required since the API returns a color in the format ABGR (red and blue swapped). If we want to make a screenshot using this method, we will see that every API call requires some time resulting in a really bad performance. Even grabbing 32x32 pixels (1024 API calls) requires about 20 seconds. This means that every P/Invoke call costs us about 20 milliseconds.
The `DllImport` attribute constructor has some other interesting options like `CallingConvention`, which uses the `CallingConvention` enumeration. Here, we have possibilities like `StdCall` (the default value), `Winapi` (which takes the default convention of the OS), or `Cdecl`. The last one supports a variable number of arguments, i.e., it is the best fit for C functions with a variable number of arguments:
[DllImport("msvcrt.dll", CharSet=CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
public static extern int printf(String format, int i, double d);
Additionally, there is `ThisCall`, which can be used to communicate with C++ methods. Here, the pointer to the object itself (`this`) is the first argument; by convention, it is placed in a special register.
There are a lot more things to mention (as usual), however, in a nutshell everything builds up on the basics that have been discussed in this section. To conclude:
- In communication with native code, always think about using only objects and data types that are closest to the original. If we pass in instances of classes and other CLR objects, the possibility of getting exceptions or false return values is quite high.
- We should think twice before making a native call, and if we do, we should do it only a few times. Therefore, searching for existing managed alternatives is always a good start.
- Always think about the possibility that the required DLL is not on the target system, or that we have a typo in the name of the library. Better check twice here.
The references include a link to the page PInvoke.net. Here the most common Windows API calls are described with interop code snippets. This is a great source for getting API access and more without having to dig through the MSDN for hours.
Effective C#
The most important rule in writing efficient C# code is to remember that we are actually bound to CLR types with (some) overhead and managed memory. When we think about actual performance optimizations, we should always respect code maintainability and readability before optimizing. After everything works just fine, we can have a look at the actual performance. If the result is satisfying, there is no reason to keep working. Otherwise, we can use a tool like `PerfView` to investigate further.
`PerfView` lets us investigate CPU usage, managed memory, and blocked time, and perform hotspot analysis. Here, we see the so-called hot path in our code directly. This is very important, since we do not want to waste time optimizing unimportant methods or algorithms. Once we go on to do some optimization, we should think about the items of the following (incomplete) list:
- Optimizing parameters
- Using `static` fields and `static` methods
- Reducing function calls and using `switch`
- Flattening arrays
- Specifying the capacity for collections
- Using `struct` (or stopping to use it)
- Reducing `string` instantiations and `ToString` calls
- Knowing when to use a `StringBuilder` and reusing it
The first item, optimizing parameters, is meant to reduce the number of parameters to the least required number of parameters. Even though this sounds trivial, it is often not done correctly. Consider the following code:
int Multiply(int x, int y)
{
return Multiply(x, y, 1);
}
int Multiply(int x, int y, int z)
{
return x * y * z;
}
Of course, here we make multiple performance mistakes at once. First of all, we are not using `static` methods, even though these methods do not rely on any instance state. Also, we are invoking another method call (`Multiply` inside `Multiply`). Additionally, the latter places even more elements on the stack (another copy of `x` and `y`) along with the (in this case) unneeded variable `z`. Let's improve this:
static int Multiply(int x, int y)
{
return x * y;
}
static int Multiply(int x, int y, int z)
{
return x * y * z;
}
Of course, this makes the code less maintainable (if we change something in the three-parameter method, we will (most likely) have to make this change also in the two-parameter version); however, the performance is optimized. We could also think about using `ref` parameters to prevent the local copy, but then (instead of the value) we would just receive a copy of the pointer (which, for an `int`, would even be bigger, since an `int` is 4 bytes while a pointer is 8 bytes on a (de-facto standard) 64-bit system).
When we call any method that was not inlined by the JIT, a copy of the variable (the value in case of `struct` variables, the pointer in case of `class` variables) will be used. This causes stack memory operations. Therefore: it is faster to minimize arguments, and even to use constants in the called methods instead of passing them as arguments.
Another point we have seen in the example above is the usage of `static` methods. We should always mark methods as `static` when they are independent of the instance (no instance fields or instance methods are used). `static` fields are also faster than instance fields, for the same reason that `static` methods are faster than instance methods: when we load a `static` field, the runtime does not have to resolve the instance expression, and `static` methods do not require the `this` pointer to be set up by the runtime.
Inlining methods is something very common to C/C++ developers. In C#, no `inline` directive or keyword exists. Nevertheless, some inlining will be done under certain circumstances by the JIT, which is often conservative: it will not inline medium-sized or large methods, and it strongly prefers `static` methods.
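Since .NET 4.5, we can at least hint at inlining with the `MethodImplOptions.AggressiveInlining` flag; note that this is a request to the JIT, not a guarantee:

```csharp
using System.Runtime.CompilerServices;

static class MathHelpers
{
    // asks the JIT to inline this method even where it would normally decline;
    // the JIT may still refuse (e.g., for very large method bodies)
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int Square(int x)
    {
        return x * x;
    }
}
```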
A very important performance optimization is the heavy usage of `switch`. A `switch` is a less flexible and more assembly-like statement than `if`. Let's compare the following two code snippets:
if (a == 3)
{
}
else if (a == 4)
{
}
else
{
}
switch(a)
{
case 3:
break;
case 4:
break;
default:
break;
}
The first question would be about readability: this is really a matter of taste; however, I think that the `switch` version has some benefits. The real advantage lies in its performance. While the `if` version will do several compares and jumps internally, the `switch` can be compiled into a single compare and an indirect jump. Using jump tables makes `switch`es much faster than chains of `if` statements. Also, using a `char` switch on a `string` is very fast. In general, we might say that using `char` arrays is sometimes the fastest way to build up and examine a `string`.
The topic of arrays brings us to the potential performance optimization by flattening arrays. In a short micro-benchmark, we could see that multi-dimensional arrays are relatively slow. Therefore, creating a one-dimensional array and accessing it through arithmetic can boost the performance significantly. This dimensional reduction is called flattening an array.
Let's see an example of computing the sum of a 2D array of doubles:
double ComputeSum(double[,] oldMatrix)
{
int n = oldMatrix.GetLength(0);
int m = oldMatrix.GetLength(1);
double sum = 0.0;
for (int i = 0; i < n; i++)
for (int j = 0; j < m; j++)
sum += oldMatrix[i, j];
return sum;
}
double ComputeSum(double[] matrix)
{
int n = matrix.Length;
double sum = 0.0;
for (int i = 0; i < n; i++)
sum += matrix[i];
return sum;
}
For collections, we should try to find the optimal value for the optional capacity argument. This argument influences the initial buffer size. A good choice helps to avoid many allocations when appending elements. In general, collections are slower than fixed arrays: a `List<double>` is 50% to 300% slower than a `double[]`, depending on the compilation optimizations.
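A small sketch of the capacity argument (the names are my own): pre-sizing the list avoids the repeated buffer reallocations that happen when the list grows on demand:

```csharp
using System.Collections.Generic;

static class CapacityDemo
{
    public static List<double> BuildSamples(int count)
    {
        // without the capacity argument the internal array starts small
        // and is reallocated (and copied) every time it runs full
        var samples = new List<double>(count);
        for (int i = 0; i < count; ++i)
            samples.Add(i * 0.5);
        return samples;
    }
}
```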
A critical and difficult performance topic is the use of structures. While they can improve the performance of the GC by reducing the number of distinct objects, they are also always passed to methods by value. Therefore, Microsoft applies the rule that `struct` types should not contain more than 16 bytes of data. They also should only be used for creating really elementary data types.
It is quite inefficient to use the `ToString` method too often. First of all, it is another method call (there is some price tag on this); additionally, we are creating a `string` instance! This allocation requires some memory and puts some pressure on the GC. A really inefficient example is shown below:
static bool StringStartsWithA(string s)
{
return s[0].ToString() == "a" || s[0].ToString() == "A";
}
A much more improved version would look like this:
static bool StringStartsWithA(string s)
{
return s.Length > 0 && char.ToLower(s[0]) == 'a';
}
This does not look like much improvement, but actually we did a lot of things:
- We removed 2 `ToString` calls
- We omitted 2 `string` creations
- We made the code more stable (what if the `string` is empty?) (Note: checking for `null` could also be good)
- We are comparing characters instead of strings
Those `string` conversions are quite nasty. We should try to avoid them as often as possible. For example, we may need to ensure that a `string` is lower case. If the `string` is already lower case, we should avoid allocating a new `string` entirely. This is quite the same scenario as above, only that instead of a single character we would compare against a whole (lower case) `string`. This should be done character-wise.
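Such a character-wise check could be sketched as follows (the helper name is made up): we only pay for the `ToLower` allocation when at least one character actually is upper case:

```csharp
static class StringHelpers
{
    public static string EnsureLower(string s)
    {
        for (int i = 0; i < s.Length; ++i)
        {
            if (char.IsUpper(s[i]))
                return s.ToLower(); // allocate only when really necessary
        }
        return s; // already lower case: hand back the same instance
    }
}
```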
We already discussed that a `StringBuilder` object is much faster at appending `string`s together than using `Concat` with `string`s. However, `StringBuilder` is only the far better choice with big or many `string`s. If we have only a few concatenations or very small `string`s (a couple of characters), then creating a whole `StringBuilder` instance is too much. If we need a `StringBuilder` quite often, we should think about creating a `StringBuilder` spooler, where just a few instances (2 to 3 are usually more than enough) are used. Those instances can then be recycled.
Last but not least, we could also think about replacing divisions in our code or creating `string` tables for frequently used `int` values. These are quite effective changes; however, they are quite time-consuming to implement. C# is relatively slow when it comes to division operations. One alternative is to replace a division by a constant with a multiply-shift operation. Therefore, we have something like the following:
static int Divide(int num)
{
return num / div;
}
static int MulShift(int num)
{
// the multiplication must happen in 64 bits to keep the high part
return (int)(((long)num * mul) >> shift);
}
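For illustration, a well-known instance of this multiply-shift trick is unsigned division by 3: the constant 0xAAAAAAAB is ceil(2^33 / 3), and the multiplication is carried out in 64 bits so the high part is not lost (a sketch, valid for all `uint` inputs):

```csharp
static class DivBy3
{
    // n / 3 computed as a multiply and a shift: 0xAAAAAAAB = ceil(2^33 / 3)
    public static uint Divide(uint n)
    {
        return (uint)((n * 0xAAAAAAABUL) >> 33);
    }
}
```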
Now the hard part is to compute `mul` and `shift` for the specific value of `div`. In general, it is even better to avoid divisions altogether where possible:
static Point3 NormalizePointInefficient(Point3 p)
{
double norm = Math.Sqrt(p.X * p.X + p.Y * p.Y + p.Z * p.Z);
return new Point3(p.X / norm, p.Y / norm, p.Z / norm);
}
static Point3 NormalizePointImproved(Point3 p)
{
double norm = 1.0 / Math.Sqrt(p.X * p.X + p.Y * p.Y + p.Z * p.Z);
return new Point3(p.X * norm, p.Y * norm, p.Z * norm);
}
In conclusion, we can say that performance optimization is highly dependent on the application one is writing. Usually (in C#), we want to preserve readability where possible; however, if we experience performance drawbacks, we still have some powerful weapons to attack those issues. The most important concept is finding out which methods eat the performance and optimizing those by priority.
Outlook
This time there is no outlook! Unless I somehow change my mind in the future, this will be the last part in this series of tutorials on programming with C#. I hope I could show you that C# is a modern, effective (measured in time spent per project) and elegant programming language, which provides a clear structure and a robust code base.
Even though C# (usually) produces managed code, we are able to bring in some optimizations and use (external) native code for performance critical parts. Nowadays, people are playing around with C# to native code compilers and it seems that even whole operating systems could be developed purely in C# without huge performance drawbacks.
Other Articles in this Series
- Lecture Notes Part 1 of 4 - An Advanced Introduction to C#
- Lecture Notes Part 2 of 4 - Mastering C#
- Lecture Notes Part 3 of 4 - Advanced Programming With C#
- Lecture Notes Part 4 of 4 - Professional Techniques for C#
References
History
- v1.0.0 | Initial release | 21st April, 2016
- v1.0.1 | Added article list | 22nd April, 2016
- v1.0.2 | Updated some typos | 23rd April, 2016
- v1.1.0 | Updated structure w. anchors | 25th April, 2016
- v1.1.1 | Added Table of Contents | 29th April, 2016
- v1.2.0 | Thanks to Christian Andritzky for pointing out the bitmap perf. problem | 10th May, 2016
- v1.3.0 | Thanks to Alexey KK for mentioning the `INotifyPropertyChanged` issue | 13th May, 2016