Using Dapper and very judicious use of Entity Framework, any CRUD operation on any table can be generalized to a simple set of API endpoints in a single Entity controller. Furthermore, the backing table service implements the create/update/delete auditing very easily. This is a code once and forget set of controllers and services. An Authorization layer is provided to map permissions to user roles.
Much of the time, I do not need the complexities of an Object Relational Mapping (ORM) because I'm simply manipulating tables without business rules and if they have relationships, I expect the API caller to provide the correct foreign key IDs. Nor do I usually provide APIs to manipulate complex parent-child models.
Unfortunately, what I see all too often done in back-end programming is:
- Create a C# model that models the database table.
- Create CRUD endpoints for each table.
- Deserialize the JSON into the C# model.
- Use Entity Framework to insert the record into the database.
- Even more complicated:
- For records that are soft-deleted, read the record with EF, update the Deleted flag, then update the record using EF.
- For records being updated, read the record with EF, update all the properties (usually using a mapper tool), then update the record using EF.
- And lastly, if we're creating an audit record of create/update/delete operations, the programmer has to remember to call the audit service with the information to record the create/update/delete operation.
Ugh. Reams of code to basically do the same thing a thousand times. And tools like Swagger Codegen propagate the mess by generating the CRUD (no pun intended) back-end API services resulting in unique but very similar implementations for each table.
I will discuss the implementation in the sections that follow.
Edit the appsettings.json for the correct database connection strings and plugin paths. The configuration in the download looks like this:
"UseDatabase": "DefaultConnection",
"ConnectionStrings": {
"DefaultConnection": "Server=localhost;Database=DMS;Integrated Security=True;",
"MasterConnection": "Server=localhost;Database=master;Integrated Security=True;"
},
"Plugins": [
{ "Path": "C:\\projects\\Personal\\PluginNetCoreDemo\\TableService
\\bin\\Debug\\netcoreapp3.1\\TableService.dll" },
{ "Path": "C:\\projects\\Personal\\PluginNetCoreDemo\\MigratorService
\\bin\\Debug\\netcoreapp3.1\\MigratorService.dll" },
{ "Path": "C:\\projects\\Personal\\PluginNetCoreDemo\\AuditService
\\bin\\Debug\\netcoreapp3.1\\AuditService.dll" },
{ "Path": "C:\\projects\\Personal\\PluginNetCoreDemo\\DatabaseService
\\bin\\Debug\\netcoreapp3.1\\DatabaseService.dll" }
]
This is most likely not correct for your local machine. The database "DMS" (this is related to another article that I haven't published yet) will be created for you.
The above diagram illustrates:
- We have four plugins:
- Audit Services
- Table Services and a controller
- Database Services
- Migrator Service and a controller
- The migrations in Migrations.dll
- Various package references
- A separate project for integration tests
The projects must be organized as siblings of the web API application; otherwise, Visual Studio gets its brains scrambled trying to figure out project references and dependencies:
Note that the service configuration performs the basic setup, but you won't see any of the services explicitly added here because they are loaded as plugins.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers()
.AddNewtonsoftJson(options => options.SerializerSettings.Formatting = Formatting.Indented);
services.AddSwaggerGen(c =>
{
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
c.IncludeXmlComments(xmlPath);
});
var connection = Configuration.GetConnectionString(AppSettings.UseDatabase);
services.AddDbContext<IAppDbContext, AppDbContext>
(options => options.UseSqlServer(connection));
services
.AddAuthentication("tokenAuth")
.AddScheme<TokenAuthenticationSchemeOptions,
AuthenticationService>("tokenAuth", ops => { });
services.AddSingleton<IApplicationService, ApplicationService>();
services.LoadPlugins(Configuration);
}
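The LoadPlugins extension method itself isn't shown above. Here's a minimal sketch of what such a method might look like, assuming each "Plugins" entry holds an assembly path and each plugin class is registered against a matching I-prefixed interface; the actual implementation in the download may differ:

```csharp
using System.Linq;
using System.Reflection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class PluginExtensions
{
    // Hypothetical sketch -- loads each assembly listed in the "Plugins"
    // configuration section and registers each concrete class against its
    // matching I<ClassName> interface, if one exists.
    public static void LoadPlugins(this IServiceCollection services, IConfiguration cfg)
    {
        var paths = cfg.GetSection("Plugins").GetChildren().Select(c => c["Path"]);

        foreach (var path in paths)
        {
            var asm = Assembly.LoadFrom(path);

            foreach (var type in asm.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
            {
                var iface = type.GetInterfaces().FirstOrDefault(i => i.Name == "I" + type.Name);

                if (iface != null)
                {
                    services.AddSingleton(iface, type);
                }
            }
        }
    }
}
```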
The migrator controller and service are described in my article, A FluentMigrator Controller and Service for .NET Core; however, there are some points to cover here because this project is configured to use IIS.
When Visual Studio builds the project, it will provision IIS. The app pool, in this case Demo AppPool, must be given permissions in SQL Server under Security => Logins.
Note that the permissions above are probably too permissive.
You can run the migrations from the browser using http://localhost/Demo/migrator/migrateup and you should see:
Inspect FluentMigrator's VersionInfo table with http://localhost/Demo/migrator/versionInfo. You should see:
I have one migration, which looks like this using FluentMigrator's syntax:
using FluentMigrator;
namespace DMS.Migrations
{
[Migration(202201011201)]
public class _202201011201_CreateTables : Migration
{
public override void Up()
{
Create.Table("Test")
.WithColumn("ID").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("IntField").AsInt32().Nullable()
.WithColumn("StringField").AsString().Nullable()
.WithColumn("DateField").AsDate().Nullable()
.WithColumn("DateTimeField").AsDateTime().Nullable()
.WithColumn("TimeField").AsTime().Nullable()
.WithColumn("BitField").AsBoolean().Nullable()
.WithColumn("Deleted").AsBoolean().NotNullable();
Create.Table("Audit")
.WithColumn("ID").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("Entity").AsString().NotNullable()
.WithColumn("EntityId").AsInt32().NotNullable()
.WithColumn("RecordBefore").AsString(int.MaxValue).Nullable()
.WithColumn("RecordAfter").AsString(int.MaxValue).Nullable()
.WithColumn("Action").AsString().NotNullable()
.WithColumn("ActionBy").AsString().NotNullable()
.WithColumn("ActionDate").AsDateTime().NotNullable().WithDefault
(SystemMethods.CurrentDateTime)
.WithColumn("Deleted").AsBoolean().NotNullable();
}
public override void Down()
{
}
}
}
To help me organize migrations in a time-linear fashion, the best practice is to prefix the migration file name with a date/time stamp in the format yyyyMMddHHmm, followed by a descriptive name for the migration.
The above migration creates two tables:
- A Test table
- An Audit table
The Test table is used for testing the Table Service, and the Audit table is for tracking create/update/delete changes to tables (entities). We'll see how this all works when looking at the other services.
This service has no controller. I created this service so that it can be extended at some point to support transactional operations. The service is rather simplistic at the moment since none of the operations in the article require transactions. Its purpose is to return a SqlConnection for use with Dapper. Note that mixing Entity Framework and Dapper calls in a transaction is not possible because they would use two separate connection instances.
using System.Data.SqlClient;
using Microsoft.Extensions.Configuration;
using Interfaces;
using Lib;
namespace Clifton.Services
{
public class DatabaseService : IDatabaseService
{
private readonly IConfiguration cfg;
public DatabaseService(IConfiguration cfg)
{
this.cfg = cfg;
}
public SqlConnection GetSqlConnection()
{
var cs = cfg.GetConnectionString(AppSettings.Settings.UseDatabase);
var conn = new SqlConnection(cs);
return conn;
}
}
}
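As a usage sketch (the Example class and query here are mine, not from the download), a consumer asks the service for a connection and hands it straight to Dapper:

```csharp
using Dapper;
using Interfaces;

public class Example
{
    private readonly IDatabaseService dbSvc;

    public Example(IDatabaseService dbSvc)
    {
        this.dbSvc = dbSvc;
    }

    public int CountTestRecords()
    {
        // The connection is disposed when it goes out of scope.
        using var conn = dbSvc.GetSqlConnection();
        return conn.ExecuteScalar<int>("select count(*) from Test where Deleted = 0");
    }
}
```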
The Audit Service implements only a service at the moment. At some point, we may add a controller to query the operations performed on an entity, the changes to that entity, and so on. For the moment, I just wanted to implement logging the action with the "before" and "after" records. I don't even log who made the change! Earlier, I stated that mixing Dapper and Entity Framework in a single transactional connection is not possible, so here I'm basically violating my own rule. But honestly, I don't really care that much, because if the Dapper transaction fails, the audit record will never be created, as we'll see later. In fact, the audit insert could be performed asynchronously so as not to delay the response to the client.
This service uses Entity Framework because it's so much easier to insert records with EF than with Dapper, which would require the insert SQL. Therefore, we need a model:
using System;
using System.ComponentModel.DataAnnotations;
namespace Models
{
public class Audit
{
[Key]
public int ID { get; set; }
public string Entity { get; set; }
public int EntityId { get; set; }
public string RecordBefore { get; set; }
public string RecordAfter { get; set; }
public string Action { get; set; }
public string ActionBy { get; set; }
public DateTime ActionDate { get; set; }
public bool Deleted { get; set; }
}
}
For the moment, the "before" and "after" records are serialized to JSON. One can implement auditing in different ways - typically only the changes, or as I've done here, the entire before/after record state.
using System;
using Interfaces;
using Models;
using Record = System.Collections.Generic.IDictionary<string, object>;
namespace Clifton.Services
{
public class AuditService : IAuditService
{
private readonly IAppDbContext context;
public AuditService(IAppDbContext context)
{
this.context = context;
}
public void Insert(string entityName, int entityId, Record before,
Record after, string action)
{
var audit = new Audit()
{
Entity = entityName,
EntityId = entityId,
RecordBefore = before.Serialize(),
RecordAfter = after.Serialize(),
Action = action,
ActionBy = "",
ActionDate = DateTime.Now,
};
context.Audit.Add(audit);
context.SaveChanges();
}
}
}
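The Serialize() extension used above isn't shown in the article. Here's a plausible sketch, assuming Newtonsoft.Json (which the project already references for AddNewtonsoftJson); the real implementation may differ:

```csharp
using Newtonsoft.Json;
using Record = System.Collections.Generic.IDictionary<string, object>;

public static class RecordExtensions
{
    // Hypothetical sketch. Extension methods may be invoked on null receivers,
    // which matters here because inserts have no "before" record and deletes
    // have no "after" record.
    public static string Serialize(this Record record)
    {
        return record == null ? null : JsonConvert.SerializeObject(record);
    }
}
```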
Now we get to the meat of the matter - using Dapper to manipulate tables without the Entity Framework ORM. First, we have a controller for the CRUD operations on any table. I should point out two things right away:
- You probably don't want to support CRUD operations on every single table in the database! There are security issues to consider!
- As you will see, the table name is coded into the SQL, so we have a SQL injection vulnerability! There is a complicated workaround for this, but simpler solutions present themselves when addressing the first problem.
The controller implements the CRUD endpoints:
using Microsoft.AspNetCore.Mvc;
using Interfaces;
using Parameters = System.Collections.Generic.Dictionary<string, object>;
namespace Clifton.Controllers
{
[ApiController]
[Route("[controller]")]
public class EntityController : ControllerBase
{
private ITableService ts;
public EntityController(ITableService ts)
{
this.ts = ts;
}
[HttpGet("{entityName}")]
public ActionResult GetAll(string entityName)
{
var result = ts.GetAll(entityName);
return Ok(result);
}
[HttpGet("{entityName}/{entityId}")]
public ActionResult GetById(string entityName, int entityId)
{
var result = ts.GetById(entityName, entityId);
var ret = result == null ? (ActionResult)NotFound() : Ok(result);
return ret;
}
[HttpPost("{entityName}")]
public ActionResult Insert(string entityName, Parameters data)
{
var result = ts.Insert(entityName, data);
return Ok(result);
}
[HttpPatch("{entityName}/{entityId}")]
public ActionResult Update(string entityName, int entityId, Parameters data)
{
var result = ts.Update(entityName, entityId, data);
return Ok(result);
}
[HttpDelete("{entityName}/{entityId}")]
public ActionResult SoftDelete(string entityName, int entityId)
{
ts.SoftDelete(entityName, entityId);
return NoContent();
}
[HttpDelete("{entityName}/{entityId}/Hard")]
public ActionResult HardDelete(string entityName, int entityId)
{
ts.HardDelete(entityName, entityId);
return NoContent();
}
}
}
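Given the Test table from the migration, the controller's routes map to requests along these lines (my summary; the base path may include the IIS application name, e.g. /Demo):

```
GET    /entity/test           -- all non-deleted Test records
GET    /entity/test/1         -- the Test record with ID = 1
POST   /entity/test           -- insert; body: { "IntField": 1, "StringField": "test" }
PATCH  /entity/test/1         -- update the fields given in the JSON body
DELETE /entity/test/1         -- soft delete (sets the Deleted flag)
DELETE /entity/test/1/Hard    -- hard delete (removes the row)
```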
Dapper works by having the programmer write the SQL for the CRUD operations. So in most applications where you see Dapper used, you'll also see SQL hard-coded in the application. Yuck. We definitely don't want this, and we can't do this anyway because the Table Service works generically -- not with C# generics, but per the definition of the word "generically": "In a way that relates to a class or group of similar things; not specifically." Also, for queries, Dapper returns a DapperRow or a collection of DapperRow instances, which is fine because it turns out that a DapperRow is actually an IDictionary<string, object> instance, so it's perfect for serializing the response to the client.
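To illustrate (my example, not from the download), the rows returned by Dapper's non-generic Query() cast directly to IDictionary<string, object>, which is exactly what the Table Service relies on:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public static class DapperRowExample
{
    // Each dynamic row is a DapperRow, which implements
    // IDictionary<string, object>, so the cast below succeeds.
    public static List<IDictionary<string, object>> GetRows(SqlConnection conn)
    {
        return conn.Query("select * from Test where Deleted = 0")
            .Cast<IDictionary<string, object>>()
            .ToList();
    }
}
```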
So first, we have the usings, just to get them out of the way. It drives me nuts when people post code examples without stating which usings are necessary for the magic.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using Dapper;
using Interfaces;
using Lib;
using Record = System.Collections.Generic.IDictionary<string, object>;
using Records =
System.Collections.Generic.List<System.Collections.Generic.IDictionary<string, object>>;
using Parameters = System.Collections.Generic.Dictionary<string, object>;
I alias the Dictionary types because it's annoying to have to type List<Dictionary<string, object>> or similar, and the aliases make the code more readable.
The public methods mirror the controller's CRUD endpoints:
namespace Clifton.Services
{
public class TableService : ITableService
{
private readonly IDatabaseService dbSvc;
private readonly IAuditService auditSvc;
public TableService(IDatabaseService dbSvc, IAuditService auditSvc)
{
this.dbSvc = dbSvc;
this.auditSvc = auditSvc;
}
public Records GetAll(string tableName, Conditions where = null,
Joins joins = null, bool hasDeleted = true)
{
var ret = Query<Record>(tableName, null, QueryFnc, where, joins, hasDeleted).ToList();
return ret;
}
public List<T> GetAll<T>(string tableName,
Conditions where = null, Joins joins = null, bool hasDeleted = true) where T : new()
{
var ret = Query(tableName, null, QueryFnc<T>, where, joins, hasDeleted).ToList();
return ret;
}
public Record GetSingle(string tableName, int recordId, Joins joins = null)
{
var ret = Query<Record>(tableName, recordId, QueryFnc, null, joins).SingleOrDefault();
return ret;
}
public Record GetSingle(string tableName, Conditions where)
{
var ret = Query<Record>(tableName, null, QueryFnc, where).SingleOrDefault();
return ret;
}
public Record GetSingle(string tableName, Conditions where, Joins joins = null)
{
var ret = Query<Record>(tableName, null, QueryFnc, where, joins).SingleOrDefault();
return ret;
}
public Record GetById(string tableName, int entityId)
{
var where = Conditions.Where().Field(Constants.ID).Is(entityId);
var ret = Query<Record>(tableName, null, QueryFnc, where).SingleOrDefault();
return ret;
}
public Record Insert(string tableName, Parameters parms)
{
var ret = Insert(tableName, parms, QueryFnc).SingleOrDefault();
auditSvc.Insert(tableName, ret[Constants.ID].ToInt(),
null, ret, Constants.AUDIT_INSERT);
return ret;
}
public Record Update(string tableName, int entityId, Parameters parms)
{
var before = GetById(tableName, entityId);
var ret = Update(tableName, entityId, parms, QueryFnc).SingleOrDefault();
auditSvc.Insert(tableName, entityId, before, ret, Constants.AUDIT_UPDATE);
return ret;
}
public void SoftDelete(string tableName, int entityId)
{
var before = GetById(tableName, entityId);
var parms = new Parameters() { { "ID", entityId }, { Constants.DELETED, true } };
Update(tableName, entityId, parms, QueryFnc, asDelete: true).SingleOrDefault();
auditSvc.Insert(tableName, entityId, before, null, Constants.AUDIT_DELETE);
}
public void HardDelete(string tableName, int entityId)
{
var before = GetById(tableName, entityId);
using var conn = dbSvc.GetSqlConnection();
conn.Execute($"delete from {tableName} where {Constants.ID} = @id", new { id = entityId });
auditSvc.Insert(tableName, entityId, before, null, Constants.AUDIT_DELETE);
}
...
Ignore the stuff about Joins and Conditions - this is beyond the scope of this article, but it should be obvious what var where = Conditions.Where().Field(Constants.ID).Is(entityId); does. Suffice it to say that Conditions lets the client specify complex search criteria on an entity, and Joins lets the client specify complex joins across tables, which can also include Conditions. If you want to read more, see my article Adaptive Hierarchical Knowledge Management - Part II.
Anyway, the meat of the matter is in the private functions.
First, we have two Query functions: one that is general purpose, and one that is generic for use when we have a backing model (which we don't here, but I'm using a common code library shared with AHKM (see link above) that I don't want to specialize just for this article):
private Records Query(string tableName, int? id, Func<SqlConnection,
(string sql, Parameters parms), Records> query, Conditions where = null,
Joins joins = null, bool hasDeleted = true)
{
using var conn = dbSvc.GetSqlConnection();
var qinfo = SqlSelectBuilder(tableName, id, where, joins, hasDeleted);
var ret = query(conn, qinfo).ToList();
return ret;
}
private List<T> Query<T>(string tableName, int? id, Func<SqlConnection,
(string sql, Parameters parms), IEnumerable<T>> query,
Conditions where = null, Joins joins = null, bool hasDeleted = true)
{
using var conn = dbSvc.GetSqlConnection();
var qinfo = SqlSelectBuilder(tableName, id, where, joins, hasDeleted);
var ret = query(conn, qinfo).ToList();
return ret;
}
Notice both call SqlSelectBuilder:
private (string sql, Parameters parms) SqlSelectBuilder(string table, int? id,
Conditions where = null, Joins joins = null, bool hasDeleted = true)
{
var sb = GetCoreSelect(table, joins, hasDeleted);
var parms = new Parameters();
if (id != null)
{
sb.Append($" and {table}.{Constants.ID} = @{Constants.ID}");
parms.Add(Constants.ID, id.Value);
}
where?.AddConditions(sb, parms);
return (sb.ToString(), parms);
}
And it in turn calls GetCoreSelect, which we'll see used later as well:
private StringBuilder GetCoreSelect(string table, Joins joins = null, bool hasDeleted = true)
{
var joinFields = joins?.GetJoinFields(",") ?? "";
var joinTables = joins?.GetJoins() ?? "";
var withDeleteCheck = hasDeleted ? $"where {table}.{Constants.DELETED} = 0" : "";
StringBuilder sb = new StringBuilder();
sb.Append($"select {table}.* {joinFields} from {table} {joinTables} {withDeleteCheck}");
return sb;
}
Inserting with Dapper looks like this:
private (string sql, Parameters parms) SqlInsertBuilder
(string table, Parameters parms, Joins joins = null)
{
if (parms.ContainsKey(Constants.ID))
{
parms.Remove(Constants.ID);
}
parms[Constants.DELETED] = false;
var cols = String.Join(", ", parms.Keys.Select(k => k));
var vals = String.Join(", ", parms.Keys.Select(k => $"@{k}"));
StringBuilder sb = new StringBuilder();
sb.Append($"insert into {table} ({cols}) values ({vals});");
var query = SqlInsertSelectBuilder(table, joins: joins).sql;
sb.Append(query);
return (sb.ToString(), parms);
}
There are a few things to note here. First, there's a call to SqlInsertSelectBuilder, because we don't want to just insert the record; we want to get the inserted record back. This gives us the ID as well as any default or computed values that were coded as constraints in the database:
private (string sql, Parameters parms) SqlInsertSelectBuilder
(string table, Conditions where = null, Joins joins = null)
{
var sb = GetCoreSelect(table, joins);
sb.Append($" and {table}.{Constants.ID} in (SELECT CAST(SCOPE_IDENTITY() AS INT))");
var parms = new Parameters();
return (sb.ToString(), parms);
}
Also notice that any "ID" parameter is removed -- the ID is the primary key (according to our rules), and we don't want Dapper inserting it as it's an auto-increment field.
Notice also that the Deleted parameter, set to false, is added for us. I really dislike nullable fields unless the field is truly optional, and Deleted, in my opinion, is not optional.
Lastly, since parameters are passed in as a dictionary (a JSON object from the client deserializes easily into a dictionary -- see the controller), all we need to do is map the name-value pairs as parameters and create the SQL statement as a join of those parameters.
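For example, assuming Constants.ID is "ID" and Constants.DELETED is "Deleted" (my assumption from how they're used), posting { "IntField": 1, "StringField": "test" } to entity/test would generate SQL roughly like this, with whitespace tidied:

```sql
-- Approximate SQL produced by SqlInsertBuilder + SqlInsertSelectBuilder:
insert into Test (IntField, StringField, Deleted)
  values (@IntField, @StringField, @Deleted);
select Test.* from Test
  where Test.Deleted = 0
  and Test.ID in (SELECT CAST(SCOPE_IDENTITY() AS INT))
```

The trailing select is what returns the freshly inserted record, identity value and all, in a single round trip.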
Update is a similar process to insert:
private (string sql, Parameters parms)
SqlUpdateBuilder(string table, int id, Parameters parms, bool asDelete = false)
{
parms.Remove(Constants.ID);
if (!asDelete)
{
parms.Remove(Constants.DELETED);
}
var setters = String.Join(", ", parms.Keys.Select(k => $"{k}=@{k}"));
StringBuilder sb = new StringBuilder();
sb.Append($"update {table} set {setters} where {Constants.ID} = @{Constants.ID};");
var query = SqlSelectBuilder(table, id).sql;
sb.Append(query);
parms[Constants.ID] = id;
return (sb.ToString(), parms);
}
Here, there's a funky optional parameter that says, "yes, I really want to update the Deleted field," which, if you were paying attention and reading all the code carefully, you saw here:
public void SoftDelete(string tableName, int entityId)
{
var before = GetById(tableName, entityId);
var parms = new Parameters() { { "ID", entityId }, { Constants.DELETED, true } };
Update(tableName, entityId, parms, QueryFnc, asDelete: true).SingleOrDefault();
auditSvc.Insert(tableName, entityId, before, null, Constants.AUDIT_DELETE);
}
Oh, look at that asDelete: true
parameter!
In all the public CRUD methods, you'll see something like this:
var ret = Query(tableName, null, QueryFnc, where).SingleOrDefault();
where this magical QueryFnc is part of the parameters passed in to the private CRUD operations.
And you'll see it called similar to this:
var ret = query(conn, qinfo).ToList();
What's that all about? Well, I really disliked writing the same code over and over, so I have two "query" functions that do the work:
private Records QueryFnc(SqlConnection conn, (string sql, Parameters parms) qinfo)
{
try
{
var records = conn.Query(qinfo.sql, qinfo.parms).Cast<Record>().ToList();
return records;
}
catch (Exception ex)
{
throw new Exception($"SQL Exception:{ex.Message}\r\n{qinfo.sql}");
}
}
private List<T> QueryFnc<T>(SqlConnection conn, (string sql, Parameters parms) qinfo)
{
try
{
var records = conn.Query<T>(qinfo.sql, qinfo.parms).ToList();
return records;
}
catch (Exception ex)
{
throw new Exception($"SQL Exception:{ex.Message}\r\n{qinfo.sql}");
}
}
The point of QueryFnc is to wrap the actual SQL call in a try-catch so I can return a more intelligent error, including the SQL, and I really didn't want to "pollute" the various methods with try-catch blocks everywhere!
Note that these methods always return a collection, which in all cases except GetAll results in just a single record. One function returns general Record objects; the other is used when we have a backing model.
So does all this work?
Why, yes, it does!
Let's look at a basic integration test. Here are the usings, to get them out of the way:
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using FluentAssertions;
using Clifton.IntegrationTestWorkflowEngine;
using IntegrationTests.Models;
using WorkflowTestMethods;
And here is the "create entity" integration test, using FluentAssertions, which is great because if something fails, it tells you both what the value was and what the value "should be":
namespace IntegrationTests
{
[TestClass]
public class EntityCrudTests : Setup
{
public static Test testData = new Test()
{
IntField = 1,
StringField = "test",
DateField = DateTime.Parse("8/19/1962"),
DateTimeField = DateTime.Parse("3/21/1991 7:47 pm"),
TimeField = DateTime.Parse("12:05 am"),
BitField = true
};
[TestMethod]
public void CreateEntityTest()
{
ClearAllTables();
var wp = new WorkflowPacket(URL)
.Post<Test>("entity/test", testData)
.AndOk()
.IShouldSee<Test>(t => t.ID.Should().NotBe(0));
}
...
I've described how this process works in my article: Fluent Web API Integration Testing. The interesting part is the base Setup class:
using System.Data.SqlClient;
using Dapper;
namespace IntegrationTests
{
public class Setup
{
protected string URL = "http://localhost/demo";
private string connectionString = "Server=localhost;Database=DMS;Integrated Security=True;";
public void ClearAllTables()
{
using (var conn = new SqlConnection(connectionString))
{
conn.Execute("delete from Test");
conn.Execute("delete from Audit");
}
}
}
}
You'll want to change the URL and connection string for your local system.
I'm not going to put the code here for all 11 integration tests; we'll just look at one of the more interesting ones, which tests that a hard delete is recorded in the audit table:
[TestMethod]
public void HardDeleteEntityTest()
{
int id = -1;
ClearAllTables();
var wp = new WorkflowPacket(URL)
.Post<Test>("entity/test", testData)
.AndOk()
.Then(wp => wp.IGet<Test>(t => id = t.ID))
.Delete($"entity/test/{id}/Hard")
.AndNoContent()
.Get<List<Audit>>("entity/audit")
.IShouldSee<List<Audit>>(r => r.Count.Should().Be(2))
.IShouldSee<List<Audit>>(r => r.OrderBy
(q => q.ID).First().Action.Should().Be(Constants.AUDIT_INSERT))
.IShouldSee<List<Audit>>(r => r.OrderBy
(q => q.ID).Skip(1).First().Action.Should().Be(Constants.AUDIT_DELETE));
}
What we see here is:
- Create the entity
- Delete the entity
- Get the audit table (which was cleared out)
- We should have both an "INSERT" and a "DELETE" action in the audit table.
And indeed we do, as the test passes, and we can see this in the audit table in SSMS:
select Action, Entity, EntityId from Audit order by ID desc
Shows:
There are two major problems:
- The SQL injection attack possibility because the entity name is embedded in the generated SQL
- Anyone can access any entity with full CRUD capability
These two problems are solved by adding a user role permission system.
First, we need an account service, as described in my article A Simple Web API Account Manager in .NET 6. Just read that article, as it describes adding user authentication.
Adding permissions involves adding some tables to manage user role permissions, which falls under the category of authorization. Here's where it gets a bit wacky. Because of the general nature of this implementation, a user's permissions must be associated with the entity or entities that they are allowed to access. Instead of creating authorization rules on specific services or endpoints, the authorization is associated directly with the entity. This moves the authorization completely outside the purview of attributes on controllers or endpoints and into configuration tables. Note that the implementation here does not address ownership - that is a separate issue which I will discuss in a forthcoming article on implementing a document management system.
Authorization requires these tables:
Role
UserRole
Entity
EntityRole
The idea is that the CRUD "grants" for a user's role must match the "grants" for that role on one or more entities; otherwise, the user does not have permission for the CRUD operation.
Here are the migrations:
[Migration(202201011202)]
public class _202201011202_PermissionsAndRoles : Migration
{
public override void Up()
{
Create.Table("Role")
.WithColumn("Id").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("Name").AsString().NotNullable()
.WithColumn("CanCreate").AsBoolean().NotNullable()
.WithColumn("CanRead").AsBoolean().NotNullable()
.WithColumn("CanUpdate").AsBoolean().NotNullable()
.WithColumn("CanDelete").AsBoolean().NotNullable()
.WithColumn("Deleted").AsBoolean().NotNullable();
Create.Table("UserRole")
.WithColumn("Id").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("RoleId").AsInt32().NotNullable().ForeignKey("Role", "ID")
.WithColumn("UserId").AsInt32().NotNullable().ForeignKey("User", "ID")
.WithColumn("Deleted").AsBoolean().NotNullable();
Create.Table("Entity")
.WithColumn("Id").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("TableName").AsString().NotNullable()
.WithColumn("Deleted").AsBoolean().NotNullable();
Create.Table("EntityRole")
.WithColumn("Id").AsInt32().PrimaryKey().Identity().NotNullable()
.WithColumn("RoleId").AsInt32().NotNullable().ForeignKey("Role", "ID")
.WithColumn("EntityId").AsInt32().NotNullable().ForeignKey("Entity", "ID")
.WithColumn("Deleted").AsBoolean().NotNullable();
}
A policy is added in Startup.cs:
services
.AddAuthorization(options => options.AddPolicy
(Constants.ENTITY_AUTHORIZATION_SCHEME, policy =>
{
policy.RequireAuthenticatedUser();
policy.Requirements.Add(new UserHasEntityPermission());
}));
services.AddScoped<IAuthorizationHandler, EntityAuthenticationService>();
and the implementation of the service makes calls to the entity service to validate the entity and the user action on the entity:
protected override Task HandleRequirementAsync
(AuthorizationHandlerContext context, UserHasEntityPermission requirement)
{
var claims = context.User.Identity as ClaimsIdentity;
var token = claims.FindFirst("token").Value;
var method = claims.FindFirst("method").Value;
var path = claims.FindFirst("path").Value;
var authorized = false;
if (path.StartsWith("/entity/"))
{
var entityName = path.RightOf("/entity/").LeftOf("/");
var user = acctSvc.GetUser(token);
authorized = user.IsSysAdmin ? entityService.IsEntityValid(entityName) :
entityService.IsUserActionAuthorized(entityName, user.Id, method);
}
if (authorized)
{
context.Succeed(requirement);
}
return Task.CompletedTask;
}
Given that the SysAdmin has permissions for everything, all we want to do is validate that the entity is actually a table in the database:
public bool IsEntityValid(string entityName)
{
var recs = GetAll("TABLES", Conditions.Where().Field("TABLE_NAME").Is(entityName),
hasDeleted: false, schema: "INFORMATION_SCHEMA");
return recs.Any();
}
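Under the hood, that GetAll call amounts to a query against the system catalog roughly like this (the exact SQL and parameter name generated by Conditions are my assumptions):

```sql
-- Checks whether the requested entity name is a real table or view:
select TABLES.* from INFORMATION_SCHEMA.TABLES
  where TABLES.TABLE_NAME = @TABLE_NAME
```

If any rows come back, the entity name is a legitimate table, which also neutralizes the SQL injection concern for SysAdmin requests.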
Conversely, the user should have permissions for whatever CRUD operation they want to perform on the table.
Using SqlKata, we can write the query in a fluent syntax rather than hard coding a SQL statement for Dapper. By the way, SqlKata uses Dapper behind the scenes. Now, a typical SqlKata query looks like this:
var query = db.Query("Role")
.Join("UserRole", "Role.Id", "UserRole.RoleId")
.Join("EntityRole", "Role.Id", "EntityRole.RoleId")
.Join("Entity", "Entity.Id", "EntityRole.EntityId")
.Where("Entity.TableName", entityName)
.Where("UserRole.UserId", userId);
I really loathe hardcoded strings and don't want to use constants, so I implemented some extension methods so I can write this instead:
var query = db.Query<Role>()
.Join<Role, UserRole>()
.Join<Role, EntityRole>()
.JoinChild<EntityRole, Entity>()
.Where<Entity>(nameof(Entity.TableName), entityName)
.Where<UserRole>(nameof(UserRole.UserId), userId);
which ironically requires some minimal model implementation:
public class Role { }
public class UserRole
{
public int UserId { get; set; }
}
public class EntityRole { }
public class Entity
{
public string TableName { get; set; }
}
Funny, eh?
The extension methods are described in my article, SqlKata Extension Methods.
Back to the point: the full implementation to determine if the user is authorized for a specific CRUD operation on an entity (table) looks like this:
public bool IsUserActionAuthorized(string entityName, int userId, string method)
{
    var connection = dbSvc.GetSqlConnection();
    var compiler = new SqlServerCompiler();
    var db = new QueryFactory(connection, compiler);

    var query = db.Query<Role>()
        .Join<Role, UserRole>()
        .Join<Role, EntityRole>()
        .JoinChild<EntityRole, Entity>()
        .Where<Entity>(nameof(Entity.TableName), entityName)
        .Where<UserRole>(nameof(UserRole.UserId), userId);

    var data = query.Get<Permissions>();

    bool ok = method.MatchReturn(
        (m => m == "GET", _ => data.Any(d => d.CanRead)),
        (m => m == "POST", _ => data.Any(d => d.CanCreate)),
        (m => m == "PATCH", _ => data.Any(d => d.CanUpdate)),
        (m => m == "DELETE", _ => data.Any(d => d.CanDelete)),
        (_ => true, _ => false));

    return ok;
}
and yes, it uses the Permissions model:
public class Permissions
{
    public bool CanCreate { get; set; }
    public bool CanRead { get; set; }
    public bool CanUpdate { get; set; }
    public bool CanDelete { get; set; }
}
That MatchReturn syntax is described in my article, Stop Writing Switch and If-Else Statements!
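Without repeating that article, a minimal sketch of what such a MatchReturn extension might look like is shown below; the actual implementation is in the linked article, so the details here are assumptions:

```csharp
using System;

public static class MatchExtensions
{
    // Evaluates each (condition, result) pair in order and returns the
    // result of the first condition that matches the input value.
    public static TResult MatchReturn<T, TResult>(
        this T value,
        params (Func<T, bool> Condition, Func<T, TResult> Result)[] cases)
    {
        foreach (var (condition, result) in cases)
        {
            if (condition(value))
            {
                return result(value);
            }
        }

        return default;
    }
}
```

The trailing `(_ => true, _ => false)` case in IsUserActionAuthorized acts as the default, so an unrecognized HTTP method is never authorized.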
Each of the endpoints in the EntityController is now decorated with:
[Authorize(Policy = Constants.ENTITY_AUTHORIZATION_SCHEME)]
and .NET Core does the rest.
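For context, wiring up a custom policy like this in ConfigureServices typically looks something like the following sketch. The requirement and handler names are assumptions, as is how the entity name and user ID are extracted from the request; the handler simply delegates to IsUserActionAuthorized:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;

public class EntityAuthorizationRequirement : IAuthorizationRequirement { }

public class EntityAuthorizationHandler
    : AuthorizationHandler<EntityAuthorizationRequirement>
{
    private readonly IHttpContextAccessor accessor;
    private readonly EntityService entitySvc;

    public EntityAuthorizationHandler(IHttpContextAccessor accessor, EntityService entitySvc)
    {
        this.accessor = accessor;
        this.entitySvc = entitySvc;
    }

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        EntityAuthorizationRequirement requirement)
    {
        var request = accessor.HttpContext.Request;

        // The entity name comes from the route; the user ID from a claim.
        string entityName = request.RouteValues["entityName"]?.ToString();
        int userId = int.Parse(context.User.FindFirst("userId")?.Value ?? "-1");

        if (entitySvc.IsUserActionAuthorized(entityName, userId, request.Method))
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

// In Startup.ConfigureServices:
// services.AddHttpContextAccessor();
// services.AddSingleton<IAuthorizationHandler, EntityAuthorizationHandler>();
// services.AddAuthorization(options =>
//     options.AddPolicy(Constants.ENTITY_AUTHORIZATION_SCHEME,
//         policy => policy.Requirements.Add(new EntityAuthorizationRequirement())));
```

Registering the handler this way is what lets the single [Authorize] attribute cover every entity endpoint without per-table code.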
Now we can write some integration tests to verify that this code works as desired. First, I ended up writing a common extension method for creating a test user account with the permissions that I want to test:
public static WorkflowPacket CreateUserAndEntityRole(this WorkflowPacket wp, string entity,
    string username, string password, string roleName, Permissions permissions)
{
    int roleId = -1;
    int entityId = -1;
    int userId = -1;

    wp
        .Login()
        .Post<User>("account", new { username, password })
        .AndOk()
        .IShouldSee<User>(u => u.Id.Should().NotBe(0))
        .IGet<User>(u => userId = u.Id)
        .Log($"User ID = {userId}")
        .Post<Role>("entity/role", new
        {
            Name = roleName,
            CanCreate = permissions.CanCreate,
            CanRead = permissions.CanRead,
            CanUpdate = permissions.CanUpdate,
            CanDelete = permissions.CanDelete,
        })
        .AndOk()
        .IShouldSee<Role>(r => r.Id.Should().NotBe(0))
        .IGet<Role>(r => roleId = r.Id)
        .Post<Entity>("entity/entity", new { TableName = entity })
        .AndOk()
        .IShouldSee<Entity>(e => e.Id.Should().NotBe(0))
        .IGet<Entity>(e => entityId = e.Id)
        .Post<UserRole>("entity/userrole", new { RoleId = roleId, UserId = userId })
        .AndOk()
        .IShouldSee<UserRole>(ur => ur.Id.Should().NotBe(0))
        .Post<EntityRole>("entity/entityrole", new { RoleId = roleId, EntityId = entityId })
        .AndOk()
        .IShouldSee<EntityRole>(er => er.Id.Should().NotBe(0));

    return wp;
}
Now I can write the integration tests.
[TestMethod]
public void UserCanCreateEntityTest()
{
    ClearAllTables();

    var wp = new WorkflowPacket(URL)
        .CreateUserAndEntityRole("Test", "Marc", "fizbin", "CreateEntityRole",
            new Permissions() { CanCreate = true })
        .Login("Marc", "fizbin")
        .Post<Test>("entity/test", testData)
        .AndOk()
        .IShouldSee<Test>(t => t.ID.Should().NotBe(0));
}
This test verifies that a user with Create permission on the "Test" table can indeed create records.
[TestMethod]
public void UserCannotCreateEntityTest()
{
    ClearAllTables();

    var wp = new WorkflowPacket(URL)
        .CreateUserAndEntityRole("Test", "Marc", "fizbin", "CreateEntityRole",
            new Permissions() { CanRead = true })
        .Login("Marc", "fizbin")
        .Post<Test>("entity/test", testData)
        .AndForbidden();
}
Here, the user is given Read permissions, not Create permissions, and the integration test verifies that if the endpoint is called to create a record, the application returns "Forbidden."
[TestMethod]
public void UserCanReadEntityTest()
{
    ClearAllTables();

    var wp = new WorkflowPacket(URL)
        .CreateUserAndEntityRole("Test", "Marc", "fizbin", "CreateEntityRole",
            new Permissions() { CanRead = true })
        .Post<Test>("entity/test", testData)
        .Login("Marc", "fizbin")
        .Get<List<Test>>("entity/test")
        .AndOk()
        .IShouldSee<List<Test>>(data => data.Count.Should().Be(1));
}
Here, a record is created by the SysAdmin who has full permissions, and the record can be read by the user who has Read permissions. As mentioned earlier, the idea of record ownership is beyond the scope of this article.
[TestMethod]
public void BadEntityTest()
{
    ClearAllTables();

    var wp = new WorkflowPacket(URL)
        .CreateUserAndEntityRole("Test", "Marc", "fizbin", "CreateEntityRole",
            new Permissions() { CanCreate = true })
        .Login("Marc", "fizbin")
        .Post<Test>("entity/test2", testData)
        .AndForbidden();
}
Here, the user is trying to create a record in the entity "test2" which doesn't exist.
[TestMethod]
public void SysAdminBadEntityTest()
{
    ClearAllTables();

    var wp = new WorkflowPacket(URL)
        .CreateUserAndEntityRole("Test", "Marc", "fizbin", "CreateEntityRole",
            new Permissions() { CanCreate = true })
        .Post<Test>("entity/test2", testData)
        .AndForbidden();
}
Here, the SysAdmin is trying to create a record in the entity "test2" which doesn't exist.
At this point, I've addressed the authorization elephant, and the SQL injection elephant as well. Personally, I'm not thrilled that SQL injection still lingers when the entity service is used internally, and I feel this issue needs to be addressed better. I'm also leery of the system administration work required to give users the correct permissions on the desired entities, though a nice front-end can mitigate that. I do, however, like the granularity of controlling CRUD operations by user role. So there are a couple of baby elephants left to look at, but not now.
I think I have a decent suite of integration tests at this point to demonstrate that this technology works as intended.
I hope you find this useful as a non-ORM, general-purpose way of performing CRUD operations on tables, and I hope it reduces the number of per-table controllers, models (it seems these are called POCOs: plain old CLR objects), services, and other code you end up writing or having auto-generated! I would be very amused if there's a way to coerce Entity Framework to work in a model-less way!
I also hope that I've demonstrated the various architectures and technologies listed below:
- Plug-in controllers and services
- Fluent integration testing
- Dapper
- SqlKata
- FluentMigrator
- FluentAssertions
Honestly, this is somewhat of an experiment and somewhat not. I intend to use this architecture in additional articles, particularly on a document management system I am in the midst of writing. That said, I doubt this is everyone's cup of tea. It's too strange, and people and companies love their ORMs. Introducing the techniques described in this article will probably raise the hackles of many other developers and architects. But I march to the beat of my own drum!
- 8th February, 2022: Initial version