
Inside Headless CMS

A technical overview of headless CMSs and an implementation on ASP.NET Core

Introduction

In this article, we will learn about headless CMSs, understand their advantages and when it is convenient to employ one, and enumerate their current main limitations. To better understand how an HCMS works behind the scenes, I will explain how I designed and built RawCMS, an ASP.NET Core headless CMS with OAuth2, an extension plugin system and business logic support. This solution is available on GitHub and released on Docker Hub as a demo.

Headless CMSs

What is a Headless CMS?

A traditional CMS combines the content and the rendering part, while a headless CMS focuses only on the content. This may seem like a limitation because, strictly speaking, you lose something. The purpose of an HCMS is to decouple logic from content, making change management easier and decomposing a complex application into many components, each with its single responsibility.

Going in this direction, an HCMS can replace what you are currently calling the backend and save a lot of work otherwise spent writing CRUD code.

An HCMS is born to create multi-component applications where you can quickly change presentation logic and design, and this is a big improvement when you are working on modern websites or applications and you need to restyle or change the UI once a year because of business requirements.

Many vendors sell their product and label it as an “HCMS” just because it is decoupled (and because it sounds cool and may drive more sales). From my point of view, I stick to the original, integralist definition: a headless CMS is an API-first, non-monolithic CMS, completely decoupled from the interface and other components.

Advantages of a Headless CMS

Why use a headless CMS? I could simply say that in some scenarios it is useful to decouple systems, make frontend replacement easier and speed up the development phase, but I feel obliged to explain better using a bullet list.

  • Omnichannel readiness: The content created in a headless CMS is “pure” and you can consume it in every context you want. If you store some news content in it, you can publish it on a public website or on the intranet as well, keeping data entry in one place.
  • Low operating costs: Headless CMSs are products, so once you choose a good one, I expect it to be plug-and-play. Moreover, compared to a custom solution, updates and bug fixes come for free from the vendor.
  • Reduced time to market: A headless CMS promotes an agile way of working. You can involve multiple teams for backend and frontend and this reduces timing. Moreover, because the HCMS is a vertical solution for data storage with API consumption, most things come ready-made, so you can focus on data design rather than on technical details (like wasting time thinking about payloads, when you can have OData or GraphQL for free).
  • Vertical solution: An HCMS does one thing. This makes it very easy to learn and maintain.
  • Flexibility: Once you pick your HCMS (on-prem or cloud, no matter), your devs can use any language they prefer to implement the frontend. This means you are free from technology constraints.

Limitation of Headless CMS Solution

HCMSs are quite young compared to traditional CMSs, so even though a lot of products were born in the last few years, most of them are not mature enough to completely replace a traditional API backend. In this paragraph, I will share my experience about the limitations I found. Capabilities may vary a lot based on the specific product and whether it is an on-prem or SaaS solution.

Actually, there are mainly two kinds of headless CMS limitations:

  • Disadvantages of using an HCMS
  • Limitation of the product you install

Disadvantages of Using an HCMS

An HCMS requires you to employ multiple teams to benefit from work parallelization. Moreover, as the HCMS doesn’t do any rendering, all presentation logic is delegated to the client. This is good for decoupling, but in cases where you have only one consumer, the decoupling advantage is not so relevant and you just introduce more complexity and latency into the data fetch process. Another problem is business logic: where do you implement it? If you do not want to implement it inside the HCMS, you have to put it into the presentation layer, and with multiple consumers you will duplicate it, falling into the problems you have whenever logic lives in more than one place. Otherwise, trying to put it into the HCMS, you will discover that most cloud solutions/products are not so flexible. This introduces the next topic: what are the HCMS limitations?

Limitations of HCMS

Testing the most important HCMS solutions, I ran into many difficult situations, and the following is a list of the most common limitations. Take into account that this depends on the product, and a given one may or may not suffer from each of them, but generally speaking, most are quite common.

  • Authentication against an external provider: Most solutions do not allow authenticating users against an external system. I’m speaking about the most common scenario where you have a central authentication system and all parties pass the user token/ticket to operate on behalf of the user. In other words, if I have an OAuth2 server, I want to authenticate on the frontend and make calls with a token to all applications of the intranet, not only the HCMS, and be recognised as myself.
  • Non-standard output format: Some use GraphQL or OData, and this is good because it gives a standard approach to data consumption. The problem is that “some” doesn't mean “all”, so you have to take care of this when choosing your HCMS.
  • Business logic: In most cases, there is no possibility to define business logic at runtime, and in some cases not even by extending the core application.
  • Extensibility: It’s hard to find a solution where you can write your own code to alter business logic or add extra features. This is in part because many vendors design their HCMS as dumb data storage and in part because of the complexity of managing extensibility.

When and Where to Use a Headless CMS?

A headless CMS is a great opportunity, but we have to understand what the best scenario for employing it is, to optimize the cost/benefit ratio. The matter is that, with a regular HCMS, customization is quite limited, so if you are not in the right scenario, it will be hard to bend the HCMS to meet the business requirements. Moreover, using it just like bare data storage makes it meaningless.

When using an HCMS is convenient:

  • A lot of UI changes over time
  • A lot of applications that share the same information and one team that manages it
  • You have little business logic over the data
  • You can employ multiple teams (BE + FE)

When you should not use an HCMS:

  • There is a vertical solution that fits your needs (e.g., you want a blog: use WordPress)
  • You have a lot of business logic
  • You are not the master of the data

RawCMS: Build Your Own Headless CMS

In this chapter, we will see what RawCMS is and how I created a headless CMS using ASP.NET Core, MongoDB, Docker and some fantasy.

Why Another Headless CMS?

The intent of RawCMS is to create an HCMS without the common limitations of HCMSs (...and to have something interesting to play with while training on new technology ;-) ).

RawCms Feature Selection

So these are the features we will put into it:

  • Possibility to authenticate against other auth systems using OAuth2 introspection (or a built-in auth system)
  • Possibility to add business logic using a hook/event system
  • Possibility to add custom endpoints to manage non-data-related events
  • Possibility to add features through a plugin system
  • Possibility to validate data
  • Expose data with multiple protocols, like Web API, GraphQL and OData (a small client sketch is shown after this list)
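
To give an idea of what API-first consumption looks like from a client's perspective, here is a minimal sketch of calling a collection endpoint over plain Web API with an HttpClient. The host, the /api/CRUD/{collection} route and the paging parameters are assumptions for illustration, not the definitive RawCMS API.

C#
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class RawCmsClientExample
{
    public static async Task Main()
    {
        using (HttpClient client = new HttpClient { BaseAddress = new Uri("http://localhost:54321") })
        {
            // Pass the OAuth2 bearer token obtained from the (internal or external) auth server
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<access_token>");

            // Query a hypothetical "news" collection; route and paging parameters are illustrative
            HttpResponseMessage response =
                await client.GetAsync("/api/CRUD/news?pageNumber=1&pageSize=10");
            response.EnsureSuccessStatusCode();

            // The payload is the untyped JSON stored in the collection
            string json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json);
        }
    }
}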

Architecture

Basically, the architecture I wanted to realize is the following. At the moment, the plugin part has some limitations and workflow management is missing, but the other parts are fully functional.

(Figure: RawCMS architecture overview)

Service Layer

The service layer is the core part of the system. Using generic JObjects mapped onto MongoDB documents, you can store whatever you want in Mongo collections; all data is untyped.

Here is the most relevant part of the class, to explain how it works.

C#
public class CRUDService 
{
    public JObject Get(string collection, string id)
    {
        //Create filter by id (all entities MUST have an id field, called _id by convention)
        FilterDefinition<BsonDocument> filter = Builders<BsonDocument>.Filter.Eq
                                                ("_id", BsonObjectId.Create(id));

        IFindFluent<BsonDocument, BsonDocument> results = _mongoService
            .GetCollection<BsonDocument>(collection)
            .Find<BsonDocument>(filter);

        //Take the first match (if any) and convert it from BSON to JSON
        return ConvertBsonToJson(results.FirstOrDefault());
    }

    public ItemList Query(string collection, DataQuery query)
    {
        FilterDefinition<BsonDocument> filter = FilterDefinition<BsonDocument>.Empty;
        if (query.RawQuery != null)
        {
            filter = new JsonFilterDefinition<BsonDocument>(query.RawQuery);
        }

        InvokeAlterQuery(collection, filter);

        IFindFluent<BsonDocument, BsonDocument> results = _mongoService
            .GetCollection<BsonDocument>(collection).Find<BsonDocument>(filter)
            .Skip((query.PageNumber - 1) * query.PageSize)
            .Limit(query.PageSize);

        long count = Count(collection, filter);        

        //wrap results plus paging info into the ItemList response
        return ConvertToItemList(results, (int)count, query.PageNumber, query.PageSize);
    }

    public JObject Update(string collection, JObject item, bool replace)
    {
        //Invoke validation events
        InvokeValidation(item, collection);

        // create collection if not exists
        EnsureCollection(collection);

        FilterDefinition<BsonDocument> filter = Builders<BsonDocument>.Filter.Eq
        ("_id", BsonObjectId.Create(item["_id"].Value<string>()));

        //Invoke presave events
        InvokeProcess(collection, ref item, SavePipelineStage.PreSave);

        //insert id (mandatory)
        BsonDocument doc = BsonDocument.Parse(item.ToString());
        doc["_id"] = BsonObjectId.Create(item["_id"].Value<string>());


        UpdateOptions o = new UpdateOptions()
        {
            IsUpsert = true,
            BypassDocumentValidation = true
        };

        if (replace)
        {
            _mongoService.GetCollection<BsonDocument>(collection).ReplaceOne(filter, doc, o);
        }
        else
        {
            //"incremental" update mode: only the provided fields are modified
            BsonDocument dbset = new BsonDocument("$set", doc);
            _mongoService.GetCollection<BsonDocument>(collection).UpdateOne(filter, dbset, o);
        }
        //Post save events
        InvokeProcess(collection, ref item, SavePipelineStage.PostSave);
        //item has already been updated in place by the pre/post save events
        return item;
    }
}
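
As a usage sketch (not code from the repository), a controller could simply delegate to this service. Only the CRUDService methods shown above are assumed; the controller, its route and the binding details are illustrative.

C#
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

//Hypothetical controller delegating to the CRUDService shown above
[Route("api/[controller]/{collection}")]
public class CrudController : Controller
{
    private readonly CRUDService _service;

    public CrudController(CRUDService service)
    {
        _service = service;
    }

    [HttpGet("{id}")]
    public IActionResult Get(string collection, string id)
    {
        JObject result = _service.Get(collection, id);
        if (result == null)
        {
            return NotFound();
        }

        return Ok(result);
    }

    [HttpPost]
    public IActionResult Save(string collection, [FromBody] JObject item)
    {
        //Upsert the untyped document; validation and hooks run inside the service
        return Ok(_service.Update(collection, item, replace: false));
    }
}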

Authentication

The authentication part is implemented by adding IdentityServer and using different configurations based on the RawCMS settings. In this way, we can use the internal identity server (others get tokens from us, we have the user data) or integrate with other systems (we get the token in the request header and are able to trust another OAuth system).

Here is the most relevant part of the code. It is invoked during the Authentication plugin startup and gets its configuration from the database. All parts of the class not related to authentication configuration are omitted.

C#
public override void ConfigureServices(IServiceCollection services)
{
    base.ConfigureServices(services);

    //configuration came from constructor
    services.Configure<ConfigurationOptions>(configuration);

    services.AddSingleton<IUserStore<IdentityUser>>(x => { return userStore; });
    //... registering all identity server services for user and roles (all code omitted)
    services.AddSingleton<IUserClaimsPrincipalFactory<IdentityUser>, RawClaimsFactory>();

    // configure identity server with in-memory stores, keys, clients and scopes
    services.AddIdentityServer()
    .AddDeveloperSigningCredential()
    .AddInMemoryPersistedGrants()
    .AddInMemoryIdentityResources(config.GetIdentityResources())
    .AddInMemoryApiResources(config.GetApiResources())
    .AddInMemoryClients(config.GetClients())
    .AddAspNetIdentity<IdentityUser>()
    .AddProfileServiceCustom(userStore);

    if (config.Mode == OAuthMode.External)
    {
        OAuth2IntrospectionOptions options = new OAuth2IntrospectionOptions
        {
            //... set option basing on config (code omitted)            
        };

        options.Validate();

        services.AddAuthentication(OAuth2IntrospectionDefaults.AuthenticationScheme)
            .AddOAuth2Introspection(x =>
            {
                //copy the pre-built settings onto the options instance provided by the
                //framework (reassigning the lambda parameter would have no effect)
                x.Authority = options.Authority;
                x.ClientId = options.ClientId;
                x.ClientSecret = options.ClientSecret;
            });
    }
    else
    {
        services.AddAuthentication(OAuth2IntrospectionDefaults.AuthenticationScheme)
         .AddIdentityServerAuthentication("Bearer", options =>
         {
             //... set option basing on config (code omitted)
         });
    }

    services.AddMvc(options =>
    {
        //this applies custom authentication (e.g., API keys) in addition to standard OAuth
        options.Filters.Add(new RawAuthorizationAttribute(config.ApiKey, config.AdminApiKey));
    });
}
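
Once the scheme is registered, protecting an endpoint is standard ASP.NET Core. The sketch below assumes the Bearer/introspection setup above; the controller and route are hypothetical.

C#
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize]                       //requires a valid token (introspected or locally validated)
[Route("api/[controller]")]
public class ProtectedController : Controller
{
    [HttpGet("me")]
    public IActionResult WhoAmI()
    {
        //The claims principal is populated by the authentication handler
        return Ok(new
        {
            name = User.Identity?.Name,
            authenticated = User.Identity?.IsAuthenticated
        });
    }
}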

Lambdas

A lambda is a simple command pattern implementation; the name is inspired by the serverless model where you expose functions as REST endpoints. Based on this, you can tune everything in the system by implementing lambdas. Each lambda instance is discovered at runtime and invoked based on the lambda type and the event, passing the data context to it.

Some lambda examples are given below.

Adding a Custom Endpoint with a Lambda

C#
public class DummyRest : RestLambda
{
    public override string Name => "DummyRest";

    public override string Description => "I'm a dumb dummy request";

    public override JObject Rest(JObject input)
    {
        JObject result = new JObject()
        {
            { "input", input },
            { "now", DateTime.Now },
        };

        return result;
    }
}

Validate Data

C#
public class MyCustomValidation : SchemaValidationLambda
{
    public override string Name => "My custom Validation";

    public override string Description => "Provide  entity validation";

    public override List<Error> Validate(JObject input, string collection)
    {
        //do here all check with data
        return ImplementCheckHere(input, collection);
    }
}

Alter Data on Save

C#
public class AuditLambda : PreSaveLambda
{
    public override string Name => "AuditLambda";

    public override string Description => "Add audit settings";

    public override void Execute(string collection, ref JObject Item)
    {
        if (!Item.ContainsKey("_id") || string.IsNullOrEmpty(Item["_id"].ToString()))
        {
            Item["_createdon"] = DateTime.Now;
        }

        Item["_modifiedon"] = DateTime.Now;
    }
}
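
The runtime discovery of lambdas is not shown in the article. A minimal sketch of how it could work, assuming reflection-based scanning and a hypothetical Lambda base class and registry (not the actual RawCMS implementation), is the following.

C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

//Hypothetical base class: concrete lambdas such as RestLambda, SchemaValidationLambda
//and PreSaveLambda would derive from it
public abstract class Lambda
{
    public abstract string Name { get; }
    public abstract string Description { get; }
}

public class LambdaRegistry
{
    private readonly List<Lambda> _lambdas = new List<Lambda>();

    public void Discover(Assembly assembly)
    {
        //Find every concrete Lambda subclass in the assembly and instantiate it
        IEnumerable<Type> types = assembly.GetTypes()
            .Where(t => typeof(Lambda).IsAssignableFrom(t) && !t.IsAbstract);

        foreach (Type type in types)
        {
            _lambdas.Add((Lambda)Activator.CreateInstance(type));
        }
    }

    //Return all lambdas of a given kind (e.g., pre-save) so the caller can invoke them
    public IEnumerable<T> GetLambdas<T>() where T : Lambda => _lambdas.OfType<T>();
}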

Plugins

The idea behind the plugin system is that you create a project, develop your features, drop the DLL into the bin folder and it becomes available to the application. The main part of this will be discussed in a dedicated article, because it is quite long to explain and off topic here; I just want to show the principle of the plugin system. This also means you could use NuGet as a delivery system or feature marketplace.

C#
public class GraphQLPlugin : RawCMS.Library.Core.Extension.Plugin
{
    public override string Name => "GraphQL";
    public override string Description => "Add GraphQL CMS capabilities";
    public override void Init()
    {
        Logger.LogInformation("GraphQL plugin loaded");
    }
    public override void ConfigureServices(IServiceCollection services)
    {
        //will be triggered on Startup.cs ConfigureServices
        base.ConfigureServices(services);

    }
    private void SetConfiguration(Plugin plugin, CRUDService crudService)
    {
        //used to receive configuration from system
    }
    public override void Configure(IApplicationBuilder app, AppEngine appEngine)
    {
        // will be triggered on Startup.cs Configure
        base.Configure(app, appEngine);
    }    
}
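
The loading itself can be sketched as scanning a folder for DLLs and instantiating the Plugin subclasses they contain. The folder layout, method names and registration details below are assumptions, not the actual RawCMS loader.

C#
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

public static class PluginLoader
{
    //Hypothetical loader: scans a folder, loads each assembly and instantiates
    //every concrete subclass of the given plugin base type
    public static IEnumerable<object> LoadPlugins(string pluginFolder, Type pluginBaseType)
    {
        List<object> plugins = new List<object>();

        foreach (string dll in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);

            IEnumerable<Type> pluginTypes = assembly.GetTypes()
                .Where(t => pluginBaseType.IsAssignableFrom(t) && !t.IsAbstract);

            foreach (Type type in pluginTypes)
            {
                plugins.Add(Activator.CreateInstance(type));
            }
        }

        return plugins;
    }
}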

How to Work with RawCMS

To let users test this solution, I implemented several options.

Install from Docker

This is the most convenient option. You can find a docker-compose example inside the docs, or you can just use docker run and then link to a MongoDB instance.

docker run -p 80:8081 rawcms

or using docker compose:

version: '3'
services:
  rawcms:
    build: .
    ports:
    - "54321:54321"   
    links:
    - mongo
    environment:
    - MongoSettings__ConnectionString=mongodb://mongo:27017/rawCms
    - PORT=54321
    - ASPNETCORE_ENVIRONMENT=Docker
  mongo:
    image: mongo

The MongoSettings__ConnectionString environment variable is used to pass the connection string to the application.
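
In ASP.NET Core, the double underscore maps the variable to the nested configuration key MongoSettings:ConnectionString, so it can be bound to a plain options class. A minimal sketch (the MongoSettings class name is inferred from the variable; the registration helper is illustrative):

C#
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class MongoSettings
{
    public string ConnectionString { get; set; }
}

public static class MongoSettingsSetup
{
    public static void Register(IServiceCollection services, IConfiguration configuration)
    {
        //Binds MongoSettings__ConnectionString (or the appsettings.json section)
        //to the MongoSettings options class
        services.Configure<MongoSettings>(configuration.GetSection("MongoSettings"));
    }
}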

Install from Zip Release

If you are not ready for containers, you can download the zip file from the GitHub releases and deploy it manually as a regular ASP.NET Core application.

Build Your Own

The third possibility is to fork the solution and play with it locally. At the moment, there isn't any NuGet package to include in your setup, so the recommended approach is to add the GitHub repo as a submodule or subtree.

Points of Interest

An HCMS is a great opportunity for decoupling an architecture and avoiding useless work. This may lead to benefits like reduced time and costs, making all parties independent. Of course, it is not a magic bullet: you have to understand whether a vertical solution is more convenient or whether your business logic prevents you from using it.

I tried to implement an HCMS and we went through its most important aspects. It was fun, and we understood how to overcome the current technical limitations of HCMSs.

In the next parts of the article, we will take a deeper look at these limitations and the solutions applied. Stay tuned!

License

This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)