When I left off last time, we had created a decent little rule engine that allowed us to create static rules on our domain objects. I discussed trying to move these rules out to the database (or some other persistence medium), since this would allow them to be modified more easily. I want to keep the fidelity of the rule system that we have now, but still enable the rules to be stored inside a database. I don’t want a huge, complex system of comparison types, expressions, placeholders, regular expressions, etc… like the ones I have seen plaguing other systems that attempt this.
You can probably see where I am going with this: accomplishing it requires us to push code out to the database. Storing the rules in the same format in which they are executed within the application seems like a decent solution. And in fact, it really isn’t *that* difficult to do. But let me first say that there are issues with this solution, so read all the way through before you run out and start implementing it.
First I defined a class called DatabaseRuleFactory, which will just be my front end for retrieving rules from the database. It has one method that looks like this:
public IEnumerable<IRule<T>> FindRules<T>(RuleType ruleType)
This is a generic method, and we use the type and the rule type to pull the rule data out of the database. In my case I’m not really hitting a database; I am just using a dictionary as a fake datastore. I created a DTO that looks pretty much like the StaticRule class that I defined before, only this time the rule is stored in a string. It looks like this:
internal class RuleData
{
    private readonly string[] property;
    private readonly string errorMessage;
    private readonly string rule;

    public RuleData(string rule, string errorMessage, params string[] property)
    {
        this.property = property;
        this.errorMessage = errorMessage;
        this.rule = rule;
    }

    public string Rule { get { return rule; } }
    public string ErrorMessage { get { return errorMessage; } }
    public IEnumerable<string> Properties { get { return property; } }
}
In this case our database would likely have four columns: an Id, the string representation of our rule, an error message, and lastly a comma-separated list of the properties that are affected by the rule. You could also have a separate table to hold the properties if you were a database purist, but in this case that is probably unnecessarily normalized. So, anyways, once we read the data from our db we now have to compile the source for our rule. So what do these rules look like? A rule in the database would be stored in a form like “e.SomeProperty > 0 && e.SomeOtherProperty > 10”. They are simply short snippets of C# code that represent a particular rule, and so we are going to inject them into some wrapper code and compile them inside of it.
First we are going to load our entire list of RuleData instances for our particular type and rule type. Our RuleType enum just looks like this:
public enum RuleType { Persistence = 1, Business = 2 }
Once we have our type and RuleType, then currently we are just forming a key by concatenating them together and pulling the rules out of the dictionary.
string rulesKey = type.FullName + ruleType;
IList<RuleData> ruleData = rules[rulesKey];
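For illustration, here is one way the fake dictionary datastore might be seeded. The Customer type, its properties, and the rule strings below are all invented examples, not part of the actual implementation:

```csharp
// Hypothetical seed data for the fake datastore. "Customer" and its
// properties (Age, FirstName) are made up for this example.
var rules = new Dictionary<string, IList<RuleData>>();
rules[typeof(Customer).FullName + RuleType.Business] = new List<RuleData>
{
    new RuleData("e.Age >= 18", "Customer must be at least 18.", "Age"),
    new RuleData("!String.IsNullOrEmpty(e.FirstName)",
                 "First name is required.", "FirstName")
};
```

Note that the key is built exactly the way the lookup builds it: the type’s full name concatenated with the RuleType value.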
You will notice that we are calling “type.FullName”; we got the type by using the generic argument to our method. We will use it extensively, and it is acquired like this:
Type type = typeof (T);
Now that we have the rule data, we need to get our compiler environment set up. For those of you that do not know, the .NET Framework ships with the VB.NET and C# compilers. So, anywhere that your app can run, you can access the compiler for your particular language. Here is the chunk of code that will initially set up our compiler:
var provider = new CSharpCodeProvider();
var cp = new CompilerParameters();
cp.GenerateExecutable = false;
cp.GenerateInMemory = true;
cp.ReferencedAssemblies.Add("system.dll");
cp.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().ManifestModule.Name);
cp.ReferencedAssemblies.Add(type.Assembly.ManifestModule.Name);
So, what we are doing here is first getting our CSharpCodeProvider, which is what we use to compile. Then we start setting our compiler parameters. We don’t want an exe, we want it in memory (as opposed to written out to disk), and then we reference some assemblies. The first assembly that is referenced is “system.dll”, so that we have access to a good portion of the System namespace. You would need to explicitly reference any other dlls that you need to access in your rules. Then you will notice that we are referencing the current assembly, since that is where my Entity base type is defined. You would need to reference whatever dll has this class in it. Next we are referencing the dll that the type is defined in, because obviously we need access to it.
Now that we have our compiler set up, we need to define the code that we are going to compile with it! I’m going to break it down since it is a bit uglier than I would like. First we are going to define the namespace and class name that we are going to use in our assembly. We will then define our imports (in C# these are the “using” statements in our files). The code for this is:
string className = type.FullName.Replace('.', '_') + "Rules";
string unitNamespace = "BusinessRules";
var businessRulesNamespace = new CodeNamespace(unitNamespace);
businessRulesNamespace.Imports.Add(new CodeNamespaceImport("System"));
businessRulesNamespace.Imports.Add(new CodeNamespaceImport("System.Collections.Generic"));
Nothing complicated. We are taking the type’s full name (which includes the namespace) and replacing the dots with underscores, since periods are not valid in class names. We don’t want to use the same class name, so we are just appending “Rules” to the end of the name. We are using the namespace “BusinessRules”, but you could use whatever you wanted. Next you can see how we are creating our namespace and adding our imports.
Next we are going to define a CodeCompileUnit, which is just a chunk of code to build, and start adding stuff to it:
var unit = new CodeCompileUnit();
unit.Namespaces.Add(businessRulesNamespace);
var businessRuleClass = new CodeTypeDeclaration(className);
int i = 0;
foreach (RuleData rule in ruleData)
{
    i++;
    string source = @"public bool Execute{0}({1} e){{ return ({2}); }}";
    source = String.Format(source, i, type.FullName, rule.Rule);
    var method = new CodeSnippetTypeMember(source);
    businessRuleClass.Members.Add(method);
}
businessRulesNamespace.Types.Add(businessRuleClass);
Here we declare our unit, add our namespace to it, and create a CodeTypeDeclaration, which is just what it sounds like: a class definition. Then we loop through our rule data and add a method to our class for each rule. The methods will be called Execute1, Execute2, Execute3, etc… and each will return our rule surrounded with parentheses. You can see that we are filling in the methods with String.Format. We are then creating a CodeSnippetTypeMember, which just compiles a chunk of code into a type member, and adding it to the class that we defined. Finally we add our class into our namespace. Phew! That was ugly.
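To make this concrete, for a hypothetical My.Domain.Customer type with two rules, the source that ends up being compiled would look roughly like this (the type name and rule bodies are invented for illustration):

```csharp
namespace BusinessRules
{
    using System;
    using System.Collections.Generic;

    public class My_Domain_CustomerRules
    {
        public bool Execute1(My.Domain.Customer e){ return (e.Age >= 18); }
        public bool Execute2(My.Domain.Customer e){ return (e.Balance >= 0); }
    }
}
```

Each rule string from the database simply becomes the body of one Execute method, strongly typed against the entity.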
Next we use our provider to compile this code:
CompilerResults cr = provider.CompileAssemblyFromDom(cp, unit);
foreach (CompilerError error in cr.Errors)
{
    Console.WriteLine(error);
}
Here I am just writing the compile errors out to the console, but you would obviously want to log them somewhere, since compile failures can cause really bad problems. We get a CompilerResults object back from our call to “CompileAssemblyFromDom”. We use this to pull our type out of the assembly and get an instance of it:
Type compiledType = cr.CompiledAssembly.GetType(unitNamespace + "." + className);
if (compiledType != null)
{
    object myobj = Activator.CreateInstance(compiledType);
    i = 0;
    foreach (RuleData rule in ruleData)
    {
        i++;
        MethodInfo mi = compiledType.GetMethod("Execute" + i);
        Predicate<T> predicate = p => (bool) mi.Invoke(myobj, new object[] {p});
        result.Add(new StaticRule<T>(predicate, rule.ErrorMessage, rule.Properties.ToArray()));
    }
}
In the first line we are just accessing the assembly that we compiled and pulling our class out of it. Then we are using Activator.CreateInstance, which anyone who has done reflection has probably seen at one point or another. Once we have an instance of our class, we just loop through our rule data again, but this time we are getting each method off of our class instance and using it to construct a new StaticRule class with a Predicate that calls the method. Got that straight? Ha ha. I split the Predicate out onto its own line, because before it was a bit harder to read, not that this is much better.
Let’s try to make this a bit clearer. We have the instance that we just dynamically compiled, with all of our Execute1, Execute2, etc… methods on it. So we just loop back through our RuleData objects, creating predicates that would look like this if we had prewritten classes:
p => DynamicClassInstance.Execute2(p)
Hopefully that is a bit clearer. If not, then it might be because of the lambda. If we wrote this as an anonymous method, then it would look like this:
delegate(T p) { return DynamicClassInstance.Execute2(p); }
Okay, so hopefully that is clear; if not, leave a comment. 🙂 Now that we have gone through all of this, we have a bunch of “StaticRule” instances with delegates which we can call to execute our rules. So, what is the problem here? Well, I’ll first just say that it has nothing to do with the fact that we keep having to compile this for every instance we create. We could easily cache both the data coming out of the database and the list of rules that we have created. This also isn’t all that slow; it compiles pretty fast, and if you cache it, you should really only have to compile once each time your rules change.
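As a sketch of that caching idea, FindRules could check a static dictionary before compiling. The CompileRules helper name here is hypothetical, standing in for the compile-and-reflect code shown above:

```csharp
// Hypothetical cache of compiled rule lists, keyed the same way as the
// fake datastore: type full name + rule type. Clear it when rules change.
private static readonly IDictionary<string, object> cache =
    new Dictionary<string, object>();

public IEnumerable<IRule<T>> FindRules<T>(RuleType ruleType)
{
    string key = typeof(T).FullName + ruleType;
    object cached;
    if (!cache.TryGetValue(key, out cached))
    {
        // CompileRules is a stand-in for the compilation code above.
        cached = CompileRules<T>(ruleType);
        cache[key] = cached;
    }
    return (IEnumerable<IRule<T>>)cached;
}
```

In a real application you would also want to make this thread-safe (e.g. lock around the dictionary) and invalidate entries whenever the rules in the database change.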
So what is the problem? Well, first of all, we are creating a class per domain type, which means that we create and load a new dynamic assembly for every type that we have in our application. In a large business application that could mean creating and loading hundreds of assemblies. This is a problem, but we could easily pull all of our rules for every type from the database at one time, put them all into a single class, and compile it. So even this can be worked around without a ton of effort.
Come on, what is the issue already?! Well, the real issue is that we are loading these assemblies into our AppDomain. If you have worked much with assemblies and AppDomains, you will know that once you load an assembly into an AppDomain, it can’t be unloaded. In order to unload an assembly you have to unload the entire AppDomain. But we are compiling and loading these assemblies into our application’s main AppDomain, which we can’t unload. So every time we have to recompile our rules, we create and load a new assembly, and the old one sticks around, eating up more and more memory. I don’t know about you, but I don’t consider “recycle the app pool regularly” a solution to a problem. Some people might.
So, the path to take is to spin up a new AppDomain and load your assemblies into it. Then you have to use MarshalByRefObject types to communicate back and forth with the new AppDomain. The issue is that we need a delegate pointing to the type which lives in the new AppDomain. So we would either need a type to proxy across the AppDomain boundary for every call to run a validation rule (no good due to serialization), or we would pass a delegate back out of the new AppDomain, which would cause the other assembly to be loaded into our current AppDomain anyway. Booo! So, we are a bit stuck. In my mind, we don’t really have a great option with this route. But I do have a few tricks up my sleeve.
What we need here is some way to get the rules out of the database, turn them into executable code, and execute them without having to compile them into their own assembly. Well, what does this sound like? Quite frankly, a scripting language! If only we had a scripting language that we could load from the database, pass C# objects into, run some code against them, and then get the result out. Wouldn’t that be nice? Well, your wish is granted, because we have a few options! We could use IronPython or IronRuby, and if you follow my blog then you know which one we are going to use. So, stick around until next time, when we implement our database rules using IronRuby as the execution engine.