Service Based Logging Solution

Introducing a centralized, two-stage log4net logging solution and its real-time notification API.

Introduction ^

Logging is an important aspect of many software systems. It can be used, for example, for:

  • Monitoring and/or auditing the behavior of the system and/or its users. Logging provides a "flattened", time-stamped view of the execution flow of the system. Provided that enough information is recorded, it can be analyzed after the fact (e.g., using data-mining techniques) to reveal aspects of the system that are hard to see at design time.
  • Debugging potential problems of a system under development. This is required in many scenarios, such as debugging the concurrent, real-time behavior of a distributed and/or multi-threaded system in which many "processes" run in parallel; stopping one of them with a debugger would break the timing-sensitive environment that the very feature being debugged depends upon.
  • Profiling software, especially distributed software, based on instrumentation (see here). The structured log items and the native, object-oriented relational data graph programming API provide a rich set of information and a data-iteration environment for profiling software to take advantage of.
  • Selectively fine-tuning the performance, or diagnosing potential problems, of certain parts of a released system in production, with or without source code, provided that the said system is properly instrumented.
  • Critical event notification and processing. There are various kinds of exceptional events that can be considered critical: pre-failure warnings or failure events, events signaling opportunities of various kinds, potential security breach or violation events, etc. A system may need the logging system to publish these so that more specialized agents can be assigned to handle them. Such events should be processed as quickly as possible, so a real-time system is desirable for them.

The present article describes an implementation of a lossless log4net appender that sends complete logging information to a centralized data service, where structured data of identical format from various parts of a distributed system are saved without making any assumption about how the data are going to be used. That is the first stage. The second stage involves client software making use of the data by taking advantage of the intelligent query API (see here) and the real-time notification framework of the data service.

It also introduces one of the real-time notification APIs of the logging data service, with which monitoring client software or agents can be notified of data changes of interest and respond to the received events in real time, according to the needs of the system.

Background ^

log4net is one of the well-established logging frameworks that .NET applications can adopt. It has a quite flexible configuration system that one can use to inject loggers into selected parts of the system (here is a tutorial). It is also easy to extend. The present article takes advantage of the said extensibility to build a service-based relational logging data system.
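For readers new to log4net, the following is a minimal usage sketch; the class and member names are illustrative only and not part of the present solution. A logger obtained this way routes its output to whatever appenders the configuration wires up, including the one developed in this article.

using log4net;

public class DownloadHandler
{
    // one logger per class is the usual log4net convention
    private static readonly ILog log = LogManager.GetLogger(typeof(DownloadHandler));

    public void HandleBlock(int seq)
    {
        // goes to every appender configured for this logger
        log.Debug("Data block #" + seq + " received");
    }
}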

The structured log items produced by log4net contain rich information about the execution environment in which each log item is produced. For example, log4net can use the .NET diagnostic facilities to extract a complete set of stack frames at the point where the logging call is made. This information is important in various debugging and profiling scenarios.

The default appenders included in log4net, and most openly available custom ones, follow a traditional strategy of outputting only a particular view (or layout) of the said data, text-oriented most of the time, so that much important meta-information is projected out in one way or another before "seeing the light of day". The information lost in the projection is not available for post-logging or real-time subscription-based analysis. In addition, a later change in projection strategy could render earlier data inconsistent with later data in format and in information content, which could cause problems for automated analysis or processing tools.

Web applications nowadays are intrinsically multi-user, multi-threaded, multi-process, and most likely distributed. Many activities progress in parallel at the same time. The asynchronous programming pattern of the recent .NET framework makes the situation "worse", since an async method can contain sections of code that execute on different threads of a pool, scheduled by the framework in a seemingly random fashion. An executing job consists of a logically connected sequence of operations, forming a line of execution. These logically sequential lines of execution will be referred to as "logical threads" in the following. A centralized log of such a system contains interleaved entries from various parallel logical threads. As the number of threads increases, they quickly become too complex to identify and disentangle by traditional means (e.g., relying on our eyes and brain) without accurate, most likely structured, querying tools.

The following figure is a snapshot of a file-based log for just one user ("user-a") in just one range-based parallel download session, created by a capable download client for a "file" of around 100 MB saved in a data service. The download is handled in the web application using the .NET async programming framework. Here the number in [...] is the thread id and the number following # is a sequence number for a data block. A logical thread can be identified as being formed by records having consecutively increasing sequence numbers. As can be seen from the figure, they are mixed in a seemingly random way.

Figure: Interleaving log records where physical threads and logical threads are mixed.

For example, the sequence #1, #2, #3 ... of data blocks is handled by managed threads [24], [13], [8] ..., despite the fact that all of them (including other sequences or logical threads) are processed in a loop of a single async method.

The present solution has three main features:

  1. It saves all default information acquired by the log4net framework, plus additional meta-information relevant to the typical execution environment of a multi-user web application, into a relational data source, without the need to make early assumptions or judgments about how the data should be projected. This way, the log information can be used by various parties and for various purposes, and particular projections or layouts of the data can be made at that later stage;
  2. The relational data are supported by a relational data service that contains generic user interfaces which can be used right away to formulate and execute structured queries of arbitrary complexity, with the help of an intelligent guiding system. The generic interface may not be enough for more demanding presentation and analysis, like iterative data analysis or data mining. To that end, a user can create his/her own analysis tools based on the client API of the data service, especially the query API. An introduction to how this is normally done is given here.
  3. The data service contains subscription ports that a client agent can subscribe to in order to process logging events in real time.

Real-time monitoring of a system has many advantages, especially for events that are considered critical. Critical events come in many kinds. There are design-time expected, well-understood ones that are handled by the system itself, and exceptional ones that are either unexpected, less understood, or too complicated for the system itself to handle at the time it is built. The latter need to be either thrown away or, in a better designed system, delegated to an extensible end-point, like a logging system, that may or may not have external handlers.

There are other real-time exception monitoring solutions, like the ElmahR project for visually monitoring uncaught exceptions of a system. But the present solution is more general: it can monitor not only uncaught exceptions but any log data streams of interest, based on user-provided filter expressions. In addition, it is more extensible due to its programmable API, which makes it possible for monitoring agents to act on received events automatically.

The Data Schema ^

Depending on its usage, logging information can be saved to different data stores. Some of it could be saved to data sets that belong to the main database of the system; other information, like debug information produced during development, could be saved to more temporary stores that can be discarded after the particular debug task is completed. In addition, log4net allows saving the same log items to multiple stores.

What is shown in the following contains only the data sets relevant to the present article. They should be attached to other parts of the data source using a foreign key, namely the AppID property, that points to an entity in a data set containing a collection of applications. For simplicity, the data service included in this article is extended from the simple ASP.NET membership data source described here and here. There is no particular reason for this choice other than that it saves project administration effort by not creating another independent demo data service project.

The present implementation of the part of the data schema responsible for logging consists of three data sets, namely EventLogs, EventLocations and EventStackFrames. Their dependencies on each other are shown schematically in the following:

Figure: New data sets attached to an existing relational data set.

Here an entity in the EventLogs data set represents the main log item. It has a one-to-one relationship with a corresponding entity in the EventLocations data set, which contains the location information, i.e., the place (line number) in code (file name) where the corresponding logging happened. An entity in the EventLocations data set has a one-to-many relationship with a sub-set of entities in the EventStackFrames data set. The said sub-set of entities represents the method invocation path that leads to the position in code at which the log item is produced. This information is sometimes very important to have when debugging problems and/or tuning the performance of the system.

The following are the details of the data schema for each of the above mentioned sets:

Figure: Data schema for the EventLogs data set.

Figure: Data schema for the EventLocations data set.

Figure: Data schema for the EventStackFrames data set.

The included logging data service is produced according to an extended data schema derived from the above definitions.
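For orientation, the sketch below lists the key fields of the three entity types as they can be inferred from the appender code shown later in this article; the actual generated entity classes are richer and follow the schema figures above.

// Inferred key fields only, for orientation; not the generated classes.
public class EventLog
{
    public string ID;                    // primary key (a GUID string)
    public string AppID;                 // foreign key to the applications set
    public string EventLevel;            // DEBUG, INFO, WARN, ERROR, FATAL
    public string Message_;              // the rendered log message
    public EventLocation[] ChangedEventLocations;      // dependent entities
}

public class EventLocation
{
    public string EventID;               // foreign key to EventLogs
    public string ClassName_;
    public string FileName_;
    public string MethodName_;
    public string LineNumber;            // a string, as provided by log4net
    public EventStackFrame[] ChangedEventStackFrames;  // dependent entities
}

public class EventStackFrame
{
    public string EventID;               // foreign key to EventLogs
    public int ID;                       // frame order; 1 is nearest the log call
    public string ClassName_;
    public string FileName_;
    public string MethodInfo;            // method signature text
}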

The Appender ^

The data ^

It was found that the native log4net log entity type LoggingEvent in the log4net.Core namespace does not contain enough meta-information to describe the execution flow of a web application. It is extended to include more, which is encoded in the class

internal class LoggingEventWrap
{
    public LoggingEvent Evt;   // the native log4net log entity
    public string webUser;     // unique id of the current web user
    public string pageUrl;     // url of the requested page
    public string referUrl;    // referring url, if any
    public string requestId;   // per-request tracing id
}

Here Evt is the native log4net log entity, and the rest is a minimal set of information relevant to a multi-user web application. They are obtained, when possible, from a set of globally registered value providers for log4net:

string webUserName = GlobalContext.Properties["user"].ToString();
string pageUrl = GlobalContext.Properties["pageUrl"].ToString();
string referUrl = GlobalContext.Properties["referUrl"].ToString();
string requestId = GlobalContext.Properties["requestId"].ToString();

which are registered inside the Global.asax.cs file of the ASP.NET application

protected void Application_Start()
{
    log4net.GlobalContext.Properties["user"] = new HttpContextUserNameProvider();
    log4net.GlobalContext.Properties["pageUrl"] = new HttpRequestUrlProvider();
    log4net.GlobalContext.Properties["referUrl"] = new HttpReferringUrlProvider();
    log4net.GlobalContext.Properties["requestId"] = new HttpRequestTraceIDProvider();
    ....
}

These providers are defined inside the HttpContextInfoProvider.cs file of the project. For example

public class HttpContextUserNameProvider
{
    public override string ToString()
    {
        HttpContext c = HttpContext.Current;
        if (c != null)
        {
            if (c.User != null && c.User.Identity.IsAuthenticated)
                return c.User.Identity.Name;
            else
                return c.Request != null && c.Request.AnonymousID != null ? 
                       c.Request.AnonymousID : "Request from Unknown Users";
        }
        else
        {
            if (Thread.CurrentPrincipal.Identity.IsAuthenticated)
                return Thread.CurrentPrincipal.Identity.Name;
            else
                return "Request from Unknown Users";
        }
    }
}

Its purpose is to get a unique id for the current user, authenticated or not, so that log records from different users can be identified and separated for whatever purpose. In cases where HttpContext.Current is available, it is used to extract web-related information: when the user is authenticated, the provider returns the user name; otherwise, if the web application is configured to enable anonymous identification of unauthenticated visitors, it returns the AnonymousID of the visitor to distinguish him/her from other visitors.

Note: AnonymousID is not automatically enabled for an ASP.NET application; the <anonymousIdentification> node that has to be added under <system.web> in the Web.config file is given in the Instrumentation section below.

HttpContext.Current is not always available in an ASP.NET application. For example, if a request is handled by the Web API or SignalR channels, then HttpContext.Current is not available. In that case, the generic property Thread.CurrentPrincipal.Identity is used when the visitor is authenticated. However, there seems to be no mechanism for identifying anonymous users when HttpContext.Current is not available.

The entity graph to be sent to the data service is built from an instance of the LoggingEventWrap class:

private static EventLog getEntity(LoggingEventWrap evtw)
{
    EventLog log = new EventLog();
    log.ID = Guid.NewGuid().ToString();
    log.AppAgent = evtw.Evt.UserName;
    log.AppDomain = evtw.Evt.Domain;
    log.AppID = App != null ? App.ID : null;
    log.EventLevel = evtw.Evt.Level.Name;
    if (evtw.Evt.ExceptionObject != null)
    {
        log.ExceptionInfo = excpetionToString(evtw.Evt.ExceptionObject);
        //it's important to turn this on for delay loaded properties
        log.IsExceptionInfoLoaded = true;
    }
    log.LoggerName = evtw.Evt.LoggerName;
    TracedLogMessage tmsg = null;
    if (evtw.Evt.MessageObject is TracedLogMessage)
    {
        tmsg = evtw.Evt.MessageObject as TracedLogMessage;
        log.Message_ = tmsg.Msg;
        log.CallTrackID = tmsg.ID;
    }
    else if (evtw.Evt.MessageObject is string)
        log.Message_ = evtw.Evt.MessageObject as string;
    else
        log.Message_ = evtw.Evt.RenderedMessage;
    log.ThreadName_ = evtw.Evt.ThreadName;
    log.ThreadPrincipal = evtw.Evt.Identity;
    log.TimeStamp_ = evtw.Evt.TimeStamp.Ticks;
    log.Username = evtw.webUser == null ? evtw.Evt.UserName : evtw.webUser;
    log.PageUrl = evtw.pageUrl;
    log.ReferringUrl = evtw.referUrl;
    if (tmsg == null)
        log.RequestID = evtw.requestId;
    if (evtw.Evt.Level >= Level.Debug && evtw.Evt.LocationInformation != null &&
        _recordStackFrames)
    {
        log.ChangedEventLocations = new EventLocation[] 
                                    { 
                                        getLocation(log.ID, evtw.Evt.LocationInformation) 
                                    };
    }
    return log;
}

private static EventLocation getLocation(string id, LocationInfo loc)
{
    EventLocation eloc = new EventLocation();
    eloc.EventID = id;
    eloc.ClassName_ = loc.ClassName;
    // 220 is the current FileName_ column size; keep the tail, prefixed with "...".
    eloc.FileName_ = loc.FileName != null && loc.FileName.Length > 220 ?
                         "..." + loc.FileName.Substring(loc.FileName.Length - (220 - 3)) :
                         loc.FileName;
    eloc.MethodName_ = loc.MethodName;
    eloc.LineNumber = loc.LineNumber;
    if (loc.StackFrames != null && loc.StackFrames.Length > 0)
    {
        List<EventStackFrame> frames = new List<EventStackFrame>();
        int frmId = 1;
        foreach (var frm in loc.StackFrames)
        {
            if (_maxStackFramesUp >= 0 && frmId > _maxStackFramesUp)
                break;
            else if (_userStackFramesOnly && string.IsNullOrEmpty(frm.FileName))
                continue;
            EventStackFrame efrm = new EventStackFrame();
            efrm.EventID = id;
            efrm.ID = frmId++;
            efrm.ClassName_ = frm.ClassName;
            // 220 is the current FileName_ column size; keep the tail, prefixed with "...".
            efrm.FileName_ = frm.FileName != null && frm.FileName.Length > 220 ?
                                 "..." + frm.FileName.Substring(frm.FileName.Length - (220 - 3)) :
                                 frm.FileName;
            string callinfo = frm.Method.Name + "(";
            foreach (var p in frm.Method.Parameters)
                callinfo += p + ", ";
            callinfo = callinfo.TrimEnd(", ".ToCharArray()) + ")";
            efrm.MethodInfo = callinfo;
            frames.Add(efrm);
        }
        eloc.ChangedEventStackFrames = frames.ToArray();
    }
    return eloc;
}

As described in a few other related articles, in order to have a set of entities that depend on another entity updated together with the latter, the dependents must be put into the

"Changed" + <dependent entity name> + "s"

property of the said entity. In our case, because an EventLog entity has a set of dependent entities (only one here, since the relation is one-to-one) of type EventLocation, the EventLog entity must have a property named ChangedEventLocations of type EventLocation[], and the dependent EventLocation entities should be put into it in order for them to be updated at the data service together with the parent entity. Likewise, an EventLocation entity has a set of dependent entities of type EventStackFrame, which should be put into its ChangedEventStackFrames property. The above two methods build such an entity tree, which a client can use to insert the corresponding data graph in just one call to the data service; the service is able to handle primary keys, foreign key constraints and data duplications (due to the tree structure).
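To make the convention concrete, here is a small sketch of wiring such an entity tree by hand, with illustrative field values; the getEntity and getLocation methods above do the same thing with real data.

// dependents always go into the parent's "Changed..." property
var log = new EventLog { ID = Guid.NewGuid().ToString(), Message_ = "..." };
var loc = new EventLocation { EventID = log.ID, MethodName_ = "SomeMethod" };
loc.ChangedEventStackFrames = new EventStackFrame[]
{
    new EventStackFrame { EventID = log.ID, ID = 1, MethodInfo = "Caller()" }
};
log.ChangedEventLocations = new EventLocation[] { loc };
// the whole graph can now be persisted in a single service call (see sendBlock below)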

The actions ^

An implementation of a log4net appender can be started either from the IAppender interface or from the AppenderSkeleton class. Both are defined inside the log4net package under the namespace log4net.Appender. We choose the latter here to keep things simple, since some of the standard work is already handled inside the AppenderSkeleton class. All that needs to be done is to implement the methods Append(LoggingEvent evt) and, optionally, Append(LoggingEvent[] evts).

Simple as that is, there are other concerns to consider when dealing with a remote data service. One of them is the fact that calling a remote data service takes more time to complete than performing local IO operations. log4net itself uses the mechanism implemented in the BufferingAppenderSkeleton class under the log4net.Appender namespace for such appenders. It has a few issues in our application scenarios, so we would like to develop our own mechanism.

A means must therefore be devised to hide, or at least delay, such effects as much as possible, so that they appear to have little actual effect in most application scenarios. One such scenario is when the system, on average, does not log enough items per unit of time to exceed the data throughput of the network and the data service, but can have temporary bursts of large quantities of log items. In this case the present appender can appear to be faster than, or at least as fast as, many of the local log appenders.

To this end, we choose an asynchronous mechanism to do the data service updating, in a producer-consumer setting. Its implementation, although quite standard, can be a little involved. However, the .NET framework already provides a ready-to-use one. More specifically, a bounded BlockingCollection instance from the System.Collections.Concurrent namespace can be used to hold the log items when the above mentioned Append methods are called; the Append methods act as the producer.

private static BlockingCollection<EventLog> EventQueue
{
    get
    {
        return _eventQueue ?? (_eventQueue = new BlockingCollection<EventLog>(_maxQueueLength));
    }
}
private static BlockingCollection<EventLog> _eventQueue = null;    

It holds a collection of entity graphs of type EventLog to be sent to the data service. It takes very little time to add an item to the collection, as long as the capacity of the said instance has not been reached.

protected override void Append(LoggingEvent evt)
{
    SetupThread();
    string webUserName = GlobalContext.Properties["user"].ToString();
    string pageUrl = GlobalContext.Properties["pageUrl"].ToString();
    string referUrl = GlobalContext.Properties["referUrl"].ToString();
    string requestId = GlobalContext.Properties["requestId"].ToString();
    var evtw = new LoggingEventWrap { 
                       Evt = evt, 
                       webUser = webUserName, 
                       pageUrl = pageUrl, 
                       referUrl = referUrl, 
                       requestId = requestId 
                   };
    if (!_lossy)
        EventQueue.Add(getEntity(evtw));
    else
        EventQueue.TryAdd(getEntity(evtw));
}

The getEntity method, described in the previous sub-section, is used to build an entity graph to be sent to the data service. The Append method handles the addition differently, depending on the value of _lossy specified in the configuration, when there are already _maxQueueLength items waiting to be sent to the data service, which can happen when logging is too fast for the data service to handle. When _lossy is false, the method blocks at the EventQueue.Add(getEntity(evtw)) statement until some items have been sent to the data service; otherwise, the new log item is dropped and the method returns immediately.

At the same time, a background thread, which acts as the consumer, is created to do the data service updates in an infinite loop, waiting for the next available items after creation or after finishing the update of the last block of log items.

private static void EventProcessThread()
{
    List<EventLog> block = new List<EventLog>();
    while (!stopProcessing)
    {
        EventLog e;
        while (!EventQueue.TryTake(out e, 300))
        {
            // idle: flush any leftover items before waiting again
            if (block.Count > 0)
                sendBlock(block);
            if (stopProcessing)
                break;
        }
        if (e == null) // woke up only to stop; nothing was taken
            break;
        block.Add(e);
        if (block.Count >= _maxUpdateBlockSize)
            sendBlock(block);
    }
    _thread = null;
}

It builds a local list of items to be updated:

  • When there are constant additions to the blocking collection, it calls the data service inside the sendBlock method to send a block of log items whenever it has accumulated _maxUpdateBlockSize of them;
  • When it is not busy, the blocking call EventQueue.TryTake(out e, 300) expires every 300 milliseconds. When this happens, it first checks whether there are leftover items in the block variable and, if so, sends them to the data service; then it checks whether the stopProcessing flag is set and, if so, exits the loop, otherwise it keeps waiting for an item to arrive.

The sendBlock method is:

private static void sendBlock(List<EventLog> block)
{
    try
    {
        var svc = GetService();
        svc.AddOrUpdateEntities(ClientContext.CreateCopy(), 
                                              new EventLogSet(), 
                                              block.ToArray());
    }
    catch (Exception ex)
    {
        log4net.Util.LogLog.Warn(typeof(DataServiceAppender), 
                                          excpetionToString(ex));
    }
    finally
    {
        block.Clear();
    }
}
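The SetupThread method called at the top of Append is not listed in this article. A plausible minimal sketch of it, assuming lazy, thread-safe creation of the consumer thread, could look like the following:

private static Thread _thread;
private static readonly object _threadLock = new object();

private static void SetupThread()
{
    if (_thread != null)
        return;
    lock (_threadLock)
    {
        if (_thread == null)
        {
            // a background thread does not keep the process alive at shutdown
            _thread = new Thread(EventProcessThread) { IsBackground = true };
            _thread.Start();
        }
    }
}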

The Monitor ^

Real-time monitoring is achieved through the push notification mechanisms and interfaces exposed by the data service.

Two kinds of push notification end points are supported by the data service:

  1. SignalR notifications: this channel does not have a service side filtering mechanism yet, so all logging events are pushed to the client, making it quite noisy and less performant. It is also less reliable, so it is not recommended for our purposes here.

    This scalable channel can be used for less reliable, end user side visual display of push notifications, like on web pages. It is not enabled by default. One has to set

    <publish name="EventLog" disabled="false" />

    under the cud-subscriptions/clients node inside of the Web.config file of the data service. After the setting, the data service will broadcast change notifications on the NotificationHub channel. The following statement sequence is needed to set up the client monitor:

    var hubConn = new HubConnection(url);
    hubProxy = hubConn.CreateHubProxy("NotificationHub");
    hubConn.Start().Wait();
    hubProxy.Invoke("JoinGroup", EntitySetType.EventLog.ToString()).Wait();
    hubProxy.On<dynamic>("entityChanged", (e) => {
        ... handle the event ...
    });
  2. WCF callbacks: this one is more reliable and is mainly used for server-to-server notifications inside a certain security boundary. It can be accurately controlled inside client software, which makes it more suitable as a machine-to-machine push notification means.

    There is one entry point for subscribing to and receiving data change notifications for each instance of a type of data source. It is accessed through an instance of the

    <data source name> + "DuplexServiceProxy"

    proxy class, where <data source name> is the name of the data source that supports the logging to be performed, which in the present case is AspNetMember. There can be multiple such proxies at the same time.

Notification handling class ^

A notification or callback handler class must implement the IServiceNotificationCallback interface defined inside of the Shared library of the logging data service.

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = false)]
public class CallbackHandler : IServiceNotificationCallback
{
    ... other members of the class ....

    public void EntityChanged(EntitySetType SetType, int Status, string Entity)
    {
        if ((Status & (int)EntityOpStatus.Added) != 0)
        {
            switch (SetType)
            {
                case EntitySetType.EventLog:
                    {
                        var ser = new DataContractJsonSerializer(typeof(EventLog));
                        byte[] bf = Encoding.UTF8.GetBytes(Entity);
                        MemoryStream strm = new MemoryStream(bf);
                        strm.Position = 0;
                        var e = ser.ReadObject(strm) as EventLog;

                        ... handle the entity ...


                    }
                    break;
                // case for other data sets, if interested...
            }
        }
    }

    ... other members of the class ....

}

Notification subscription ^

To subscribe to data changes inside the data service, one should execute the following statements somewhere in the application, where the AspNetMember data source is taken as an example:

var _handler = new CallbackHandler();
var _notifier = new InstanceContext(_handler);

_notifier.Closing += ... channel closing event handler

... other _notifier related event handler ...

var svc = new AspNetMemberDuplexServiceProxy(_notifier);

Here CallbackHandler is the class defined above. As the next step, one should construct the service side filter expression. Setting up service side filters is the preferred way of filtering pushed events, since it can increase performance significantly. Suppose that the monitoring agent is interested in handling ERROR or FATAL level log events; then one should construct the following filter expression:

var qexpr = new QueryExpresion
{
    FilterTks = new List<QToken>()
};
qexpr.FilterTks.Add(new QToken
{
    TkName = "EventLevel == ERROR || EventLevel == FATAL"
});
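As another illustration, a hypothetical filter that restricts the pushed events to a single user's ERROR level entries might look like the following; the exact literal syntax is governed by the query language of the data service, introduced in the articles referenced below.

// hypothetical filter: one user's ERROR events only
var userExpr = new QueryExpresion
{
    FilterTks = new List<QToken>()
};
userExpr.FilterTks.Add(new QToken
{
    TkName = "Username == user-a && EventLevel == ERROR"
});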

If a developer is not familiar with how such expressions are constructed, he/she can read the introduction in some of our previous articles (see, e.g., here and here). Of course, different monitoring agents may be interested in different aspects of the log events; they must each use the corresponding filter with a distinct subscription identity (see the following) when subscribing. After specifying the filter, the subscription is realized by:

var sub = new SetSubscription
{
    EntityType = EntitySetType.EventLog,
    EntityFilter = qexpr
};
svc.SubscribeToUpdates(cctx, OwnerID, SubscribeID, new SetSubscription[] { sub });

where cctx is the global instance of type CallContext for the logging data service, OwnerID is the owner identity of the subscription used to maintain it, and SubscribeID is an ID that the client software needs to keep track of in order to manage its subscriptions. The last argument, sub, specifies the type of data set to be monitored and the corresponding filter expression. If more than one data set is to be monitored, just construct a corresponding subscription for each of them in the same way; here we are interested in the EventLog data set only. A subscription can be changed or removed only by its owner, and there can be only one subscription per subscription ID, so in order not to conflict with other subscribers of the system, or to avoid changing an owner's subscription unintentionally, a GUID value is recommended. If the subscription is global to a particular application, then one can use, e.g., the ClientID property of the global instance of CallContext returned by the logging data service after a successful SignInService call during initialization of the system (see here) as both the OwnerID and the SubscribeID.

When the subscription is no longer needed, it is recommended to unsubscribe it. The following statements can be used to do the un-subscription:

var svc = new AspNetMemberDuplexServiceProxy();
svc.UnsubscribeToUpdates(cctx, OwnerID, SubscribeID);
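Putting the pieces together, a minimal subscription life cycle sketch, using GUID identities as recommended above, could read:

// cctx and qexpr are the instances constructed earlier in this section
string ownerId = Guid.NewGuid().ToString();      // keep both ids around; they are
string subscribeId = Guid.NewGuid().ToString();  // needed to manage the subscription
var sub = new SetSubscription
{
    EntityType = EntitySetType.EventLog,
    EntityFilter = qexpr
};
var svc = new AspNetMemberDuplexServiceProxy(new InstanceContext(new CallbackHandler()));
svc.SubscribeToUpdates(cctx, ownerId, subscribeId, new SetSubscription[] { sub });
// ... EntityChanged callbacks arrive while the subscription is active ...
svc.UnsubscribeToUpdates(cctx, ownerId, subscribeId);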

How to Use ^

Setting up the Logging Service ^

One can follow instructions given here to set up the demo log data service. Note that the data service must be run under .NET 4.5.1.

There is a Documents sub-directory under the root directory of the source code that contains a Sandcastle project which can be used to compile the documentation for client side (namely, web application side) programming. Before compiling the documentation, please make sure the following is done:

  • Install Sandcastle, if it is not yet installed.
  • Compile the solution for the web application so that the XML documentation files are generated.
  • The default output path for the documentation is ..\..\DataService\Documents\ClientAPI45\ relative to the document directory. Make sure this is an allowed output directory; if not, change it to a proper one.
  • If the above output directory is not the same as the Documents\ClientAPI45 sub-directory of the data service website, remove all contents of the latter, if any, and copy all the generated documentation files from the former into the latter.

Recompilation ^

.Net 4.5 and above applications ^

The current version of log4net, 1.2.13, is compiled under .NET 4.0. To be consistent, one should recompile it under the current .NET framework, that is .NET 4.5.1, from its source code available here. The newly compiled log4net assembly should be referenced instead of the downloaded one (e.g., from nuget.org). No problems have been found with switching the target framework so far.

.Net 4.0 applications ^

What if a user's system is still based on .NET 4.0 and he/she would still like to use the logging data service? That is not a problem either. What needs to be done is to re-target the "Shared" and "Proxy" projects of the data service to .NET 4.0 and then delete (turn off) the "SUPPORT_ASYNC" conditional compilation flag. Next, he/she should reference the changed projects from his/her main projects, like what is done here.

Instrumentation ^

There are many tutorials on the web on how to use log4net in a .NET application; here is one on CodeProject, for example. The demo application included in the present article uses a configuration file for log4net, named Web.log4net, that is kept outside of the Web.config file. The following line should be added to the AssemblyInfo.cs file:

[assembly: log4net.Config.XmlConfigurator(ConfigFile = "Web.log4net", Watch = true)]

To use the current log4net appender, one must initialize it first.

  • First, make a reference to the appender project from the web application, then add the following lines to the Global.asax.cs file:

    Add namespace reference:

    using log4net;
    using Archymeta.Web.Logging;

    Add ASP.NET parameter provider:

    protected void Application_Start()
    {
        log4net.GlobalContext.Properties["user"] = new HttpContextUserNameProvider();
        log4net.GlobalContext.Properties["pageUrl"] = new HttpRequestUrlProvider();
        log4net.GlobalContext.Properties["referUrl"] = new HttpReferringUrlProvider();
        log4net.GlobalContext.Properties["requestId"] = new HttpRequestTraceIDProvider();
        ... other initialization steps
    }
  • Initialize calls to the logging data service. Add the following lines after initializing the CallContext for the logging data service

    DataServiceAppender.App = App;
    DataServiceAppender.ClientContext = ClientContext.CreateCopy();
    

    inside the Startup.Auth.cs file (see here). Note: if the logging data service is different from the main data service of the application, then the App and ClientContext variables must be initialized separately for the logging data service, using a name different from that of the main data service.

  • Tracking a user. In some analysis or viewing scenarios, the activity of a single user needs to be separated out of the logging records. In that case, the log items must be tagged with user identification information. The HttpContextUserNameProvider class, listed in full in The data section above, is used for that purpose. User identification information can easily be extracted for authenticated users, as shown there. However, this is not always possible for unauthenticated ones. There is a cookie-based anonymous user ID for ASP.NET applications that can be used to identify a user, at least for a certain span of time. However, such information is not yet available for the more recent additions to the ASP.NET framework, like the Web API channel or the SignalR channel.

    For a traditional ASP.NET application, the anonymous user ID is not enabled by default. One must add the following node

    <anonymousIdentification 
              enabled="true" 
              cookieless="UseCookies" 
              cookieName=".ASPXANONYMOUS" 
              cookieTimeout="30" 
              cookiePath="/" 
              cookieRequireSSL="false" 
              cookieSlidingExpiration="true" 
              cookieProtection="All" 
    />
    

    under the <system.web> node of the Web.config file. Of course, most of the parameter values provided should be modified to suit the needs of a particular application. The anonymous ID of an unauthenticated user is also not generated automatically; the following handler should be added to the Global.asax.cs file:

    public void AnonymousIdentification_Creating(Object sender, AnonymousIdentificationEventArgs e)
    {
        e.AnonymousID = Guid.NewGuid().ToString();
    }

    It is invoked when an unauthenticated user's anonymous ID is either not available or expired. After these settings, Request.AnonymousID will be available for every request of an unauthenticated user.

  • Tracking a request. As shown in the snapshot of a log file above, one user can make multiple requests in parallel, generating interleaved log records that carry the same user identification information within a certain period of time. The user identification information alone is therefore not sufficient for a more detailed analysis; one must find a way to track an individual request during its life cycle. For a traditional ASP.NET request, this can be done by inserting the following as the first line of the controller method corresponding to the request

    public async Task<ActionResult> SomeMethod(...)
    {
        HttpContext.Items["RequestTraceID"] = Guid.NewGuid().ToString();
        ...
    }

    which will be picked up by the HttpRequestTraceIDProvider class:

    public class HttpRequestTraceIDProvider
    {
        public override string ToString()
        {
            HttpContext c = HttpContext.Current;
            if (c != null && c.Request != null)
                return c.Items["RequestTraceID"] == null ? null : c.Items["RequestTraceID"].ToString();
            else
                return null;
        }
    }

    However, HttpContext.Current is available only to regular ASP.NET requests, so its Items property will not be there for other kinds of requests, like Web API or SignalR ones. In cases where there is no such mechanism for passing request-bound parameters, it has to be done explicitly. One can use the TracedLogMessage class to wrap the logging message and the request identifier to be passed to the appender in the following way

    log.Debug(new TracedLogMessage { ID = "...", Msg = "..." });

    where the value of the ID property will be used as the request tracing identifier sent to the logging data service. However, the value "..." assigned to the ID property needs to be passed from the start of the request down to every method involved in the request life cycle in which logging is done (see the sketch after this list).

  • Uncaught exceptions. These can be logged by adding the following handler inside the Global.asax.cs file:

    protected void Application_Error(object sender, EventArgs e)
    {
        HttpException hex = Server.GetLastError() as HttpException;
        // guard against a null cast before dereferencing
        if (hex != null && hex.InnerException != null)
            log.Error("Unhandled exception thrown", hex.InnerException);
    }
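As referenced in the request tracking item above, the following sketch shows one way of threading a request tracing identifier through an async call chain when HttpContext.Current is unavailable (e.g., in a Web API handler); the method names are illustrative.

public async Task ProcessRequestAsync()
{
    // generate the trace id once, at the entry point of the request
    string requestId = Guid.NewGuid().ToString();
    log.Debug(new TracedLogMessage { ID = requestId, Msg = "request started" });
    await SaveBlockAsync(requestId, 1);
}

private async Task SaveBlockAsync(string requestId, int seq)
{
    await Task.Delay(10); // stand-in for real work
    // the explicit parameter carries the id where no ambient context exists
    log.Debug(new TracedLogMessage { ID = requestId, Msg = "block #" + seq + " saved" });
}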

As mentioned above, the log records of a multi-user ASP.NET application contain interleaved entries from multiple threads, processes and users. The tagging means described above can help a reader disentangle them; of course, a reader can also invent his/her own tagging means to suit his/her application.

Configuration ^

The configuration for log4net is located inside the Web.log4net file, as specified by the AssemblyInfo.cs attribute described above. The configuration for the current appender is provided as an example:

<appender 
      name="DataServiceLogAppender" 
      type="Archymeta.Web.Logging.DataServiceAppender, Log4NetServiceAppender">
  <maxQueueLength value="5000" />
  <maxUpdateBlockSize value="10" />
  <recordStackFrames value="true" />
  <userStackFramesOnly value="true" />
  <maxStackFramesUp value="10" />
  <lossy value="false" />
  <!-- parameters for the appender as a client of the log data service -->
  <loggerServiceUrl value="" />
  <maxReceivedMessageSize value="65536000" />
  <maxBufferPoolSize value="65536000" />
  <maxBufferSize value="65536000" />
  <maxArrayLength value="104857600" />
  <maxBytesPerRead value="4096" />
  <maxDepth value="64" />
  <maxNameTableCharCount value="16384" />
  <maxStringContentLength value="181920" />
</appender>

The following provides an explanation of various properties involved.

  • maxQueueLength: The maximum number of log items waiting to be sent to the data service. The default is 5000. How the system responds to a new log item when the number of waiting items has reached this value depends on the value of lossy: when lossy is true, the new item is discarded; otherwise, the addition statement blocks until the number of waiting log items drops below this value.

  • maxUpdateBlockSize: The maximum number of log items to accumulate locally before they are sent to the data service in one block. The default is 10. If the appender is not busy, it sends whatever is left in the local event block periodically; the client code does not need to "flush" them.

  • recordStackFrames: Whether or not to record the method invocation stack frames at the log position. The default is true.

  • userStackFramesOnly: Whether or not to include only stack frames whose code file is known; this is what "user" means here. The default is true. If the corresponding .pdb file of an assembly is not available at the deployment site, then that assembly, and the stack frames that reference it, are not considered "user" ones.

  • maxStackFramesUp: The maximum number of stack frames to include. The default is -1, which means all. Modern frameworks can have quite deep call stacks even for simple user side calls, and that stack frame information is of very limited use in debugging user side code. One can use this property to put a limit on the depth if necessary.

  • lossy: Whether or not to drop a new log item when the number of waiting log items has reached maxQueueLength. The default is false.

  • loggerServiceUrl: An optional base url of an independent log data service. Specify it only when the targeted data service is different from the main data service of the application; if not specified, the main data service of the application is assumed. When it is set to a valid value, the following additional properties of the appender are used to set the application up as a client of the logging data service.

    The following is for the WCF client binding:

    • maxReceivedMessageSize: The maximum size, in bytes, for a message that can be received on any service invocation channel involved.
    • maxBufferPoolSize: The maximum amount of memory, in bytes, allocated for use by the manager of the message buffers that receive messages from any service invocation channel involved.
    • maxBufferSize: The maximum size, in bytes, of a buffer that stores messages while they are being processed, for any service invocation channel involved.

    The following is for the readerQuotas corresponding to the binding:

    • maxArrayLength: The maximum allowed length of an array in a message received from any service invocation channel involved.
    • maxBytesPerRead: The maximum allowed bytes returned for each read.
    • maxDepth: The maximum nested node depth.
    • maxNameTableCharCount: The maximum number of characters allowed in the XML name table of the reader.
    • maxStringContentLength: The maximum string length returned by the reader.

    These properties should be set to be consistent with the configuration parameters of the logging data service.

There is no <layout> ... </layout> child node for this appender since it does not involve any "layout" projection of the data.
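For completeness, the appender must still be referenced from a logger to take effect. Assuming the root logger is used, a typical wiring in the Web.log4net file looks like:

<log4net>
  <!-- the DataServiceLogAppender node shown above goes here -->
  <root>
    <level value="DEBUG" />
    <appender-ref ref="DataServiceLogAppender" />
  </root>
</log4net>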

The Demo ^

The included package contains a demo web application that is set up for using the log system.

Log positions ^

The start up of the web application is logged. This is done inside of the Startup.cs file in the root directory of the application:

public void Configuration(IAppBuilder app)
{
    ConfigureAuth(app);
    log.Info("System start up");
}

The log-in and log-off activities are also logged; this is done inside the ArchymetaMembershipStores.dll assembly under the Libraries directory of the solution. Its source code is not included here for simplicity; it can be found in this article (most likely without the logging inserted by the time you read the present article, but it is easy to add it yourself).

The following error handler is inserted into the Global.asax.cs file

protected void Application_Error(object sender, EventArgs e)
{
    var ex = Server.GetLastError() as Exception;
    if (!(ex is HttpException))
        log.Error("Unhandled exception thrown.", ex);
}

to log unhandled exceptions. The following line is inserted into the About method of the HomeController class

 throw new InvalidOperationException("Invalid operation!!!!!!!!!");

to generate an unhandled exception to be recorded by the logger. This exception is thrown whenever the About page is visited; it is intentional, not a bug!

The following application end handler is inserted into the Global.asax.cs file

protected void Application_End(object sender, EventArgs e)
{
    log.Info("The web application stopped.");
}

to record application end events.

Viewing log entries ^

After running the demo site, log data are recorded in the data service. A reader's question might be: how do I view the log entries?

The answer is: it depends on how you want to view them in this two-stage logging solution. For a quick view, one can use the built-in query page of the data service. Visit the data service using a browser and find the EventLogs data set. After specifying the sorting conditions at the top, a list of log events will be displayed, like what is shown in the following.

Figure: Log entity raw data query page provided by the data service.

The sets of entities that depend upon a particular entity on the query page are listed at the bottom left corner of the page after the said entity is selected. In the present case, only certain entities (one, in fact) in the EventLocations data set depend on an entity in the EventLogs data set. The pop-up page for the dependent sub-set can be opened by clicking the corresponding name, "EventLocations" in the present case. Each entity displayed in the pop-up page may also have its own dependent entity sets, which can be displayed in the same way, recursively. In our current case, each entity in the EventLocations data set has a sub-set of entities in the EventStackFrames data set depending on it; they can be displayed inside a nested pop-up page by clicking the "EventStackFrames" button there (namely, inside the pop-up page for EventLocations).

This raw data viewing process may not suit everybody's needs, since the pages are produced in a generic fashion. If a user would like a more customized view of the log entries, including how they should be laid out, he/she should design and develop his/her own layout and query applications for the log entries, following the instructions given here.

Monitoring the data source activities ^

This is slightly off the topic of client side (of the data service) "logging", but since it is in itself a kind of service side "logging" that can constitute a useful debugging means, it is worth knowing about when developing software against the data service: it can display the sequence of data modification operations performed inside the data service corresponding to an operation on the client side, like inserting a log entity graph, user login/logoff, etc.

The data changing activities inside the data source, including the ones changing the EventLogs data set, can be monitored on a web page of the data service, like what is shown in the following snapshot:

Figure: Data source wide change monitoring page.

The data service does not turn such a monitor on by default. One has to set the following item under the <appSettings> node inside the Web.config of the data service to true before using the monitor, namely

<add key="DataSourceMonitoring" value="true" />

The monitor page has a relative path (url) of <data source name>/Monitoring, where <data source name> is the name of the data source, which in the present case is AspNetMember. It can be reached from the Main page for the data source by clicking the Monitoring link on top; the Main page is reached from the Home page by clicking the Data Source link.

Beyond the Demo ^

What is shown here may seem simple; however, it only scratches the surface of a more capable system. The solution provides third parties a rich set of APIs that can be used to satisfy their logging and/or other event driven application needs: from simple visual layout, to intelligent query and analysis, and even to providing a basis for certain real-time systems capable of complex event processing (CEP).

History ^

  • 2014-06-08. Article Version 1.0. Initial submission.
