As far as we have come with software construction, there are still few or no axioms on which we can base our design decisions. At its heart, this comes from the fact that software construction is not a science: software construction is to computer science as engineering is to materials chemistry. This has been argued for as long as programming has existed, and it has recently come up again between Phil Haack and Frans Bouma.
Phil Haack and Frans Bouma have had a little back and forth about a post Phil made titled "Writing testable code is about managing complexity". The synopsis (for those who have not read it): someone asked Phil what benefits, besides better testability, the MVC framework had, and Phil’s response was essentially "isn’t that enough?" Frans responded that the goal should be provability and correctness; Phil then fired back with a post entitled "What exactly are you trying to prove?" in which he argues that software systems are so complex that provability is an unattainable goal. The reality, obviously, is somewhere in the middle. (I highly recommend that you read all three posts, since I just made them sound about as interesting as a can of tuna. They are excellent posts, and both Phil and Frans are very smart people.)
Frans (who writes a wonderful tool called LLBLGen Pro, and now that I think about it, I have no idea what LLBL actually stands for) is looking at the issue from an academic perspective, where one has to dig deep into complex, seemingly intractable algorithms in order to come up with a solution. (And as you can tell from his shared items on Google Reader, he is a fan of reading academic papers.) In that world, problems are complex but isolated. In the real world, things are never this isolated. We strive to create as much isolation as possible, but the interactions quickly become very complex (though I am all about reducing complexity).
Phil, on the other hand, is coming from a world of designing and building business software. He is more of the engineer type, building a bridge and then coming up with an acceptable model in which to test it. The world is an infinitely complex place, and it is impossible to test every variable (as the Tacoma Narrows Bridge, the Citigroup Center, and the Charles de Gaulle terminal collapse have taught us), so instead we create test harnesses, and we poke and prod and create more tests until we feel like we have covered every edge case. That is how engineers solve problems: they know there are rules and guidelines they have to stay within, and they know that if they test and model enough, they can be reasonably sure that their designs will work.
Take Google’s PageRank algorithm as an example. The initial algorithm that Larry and Sergey (with others) designed and proved is actually fairly simple (the algorithm was simple, not the idea or the proof). But what happened when it was released into the wild? Well, people gamed the system, that is what happened. And so now Google certainly has thousands upon thousands of lines of code that they use to filter out spam, duplicates, etc… They had to come up with the "rel=’nofollow’" link attribute to try and stop some of it, and they are constantly tweaking their indexing engine to better spot all of the people trying to beat them at their own game. So you essentially took a computer science problem that was provable, released it to the world, and watched it turn into a completely unprovable problem (not that the basic core algorithm is no longer provable, but the system as a whole is probably now 99.9% unprovable code). So how do they manage this? Probably by testing the crap out of it. They probably run huge sets of data through a new algorithm and see what falls out, then they see what valid websites they incorrectly targeted and they start over. How else would you do it?
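To show just how simple the provable core is, here is a minimal sketch of PageRank as power iteration. Everything here is illustrative and mine, not Google’s: a toy hand-made link graph, a textbook damping factor, and none of the spam-filtering machinery that the real system grew around it.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with rank spread evenly
    for _ in range(iterations):
        # Every page keeps a small baseline share regardless of links.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: redistribute its rank evenly to everyone.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # A page passes its rank to its outlinks in equal shares.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: "c" is linked to by both "a" and "b", so it should win.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(ranks)  # "c" ends up with the highest rank
```

That is the whole provable part; the other 99.9% is what happened after the SEO crowd showed up.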
So, where am I going with this? Well, I have the utmost respect for Frans, but I am going to have to disagree with him when talking about most software that is produced. Provability is about specific algorithms and small parts of complex problems, but I think in the real world of software development we produce code that is 99.9% unprovable. Sure, parts of it may be provable, but for the most part it is not. We deal with a lot of gray areas and very little black and white. All we can do is test our code and then test it some more, then start all over and test it again. With enough rigor we will cover most of our bases and have a product that we can reasonably assume will work within our specs. We cannot say that our code is "correct", because being correct implies proof, and most of the time that is not a luxury we have.
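Here is what that looks like in miniature. The `slugify` helper and its cases are invented for illustration; the point is that each assert is just another edge case we thought to poke at, not a proof that the function is correct for every possible input.

```python
def slugify(title):
    """Turn a post title into a URL slug (illustrative helper)."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

# Poke and prod: every time a title breaks in the wild, another
# assert gets added here. Coverage grows; certainty never reaches 100%.
assert slugify("Writing Testable Code") == "writing-testable-code"
assert slugify("What, exactly?") == "what-exactly"
assert slugify("") == ""
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
```

With enough of these we can reasonably assume the helper works within our specs, which is exactly as far as "correct" goes for most of the code we ship.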