

Self-Testing Systems


Question

I had an idea I was mulling over with some colleagues. None of us knew whether such a thing currently exists.

The basic premise is a system that maintains 100% uptime but can become more efficient dynamically.

Here is the scenario:

* We quickly hash out a system against a specified set of interfaces. It has zero optimizations, yet we are confident that it is 100% stable (dubious, but for the sake of this scenario please play along).

* We then profile the original classes, and start to program replacements for the bottlenecks.

* The original and the replacement are initiated simultaneously and synchronized.

* An original is allowed to run to completion: if a replacement hasn't completed by then, the system vetoes it as a replacement for the original.

* A replacement must always return the same value as the original, a specified number of times and over a specific range of inputs, before it is adopted as a replacement for the original.

* If an exception occurs after a replacement is adopted, the system automatically retries the same operation with the class that it superseded.

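The trial-and-adoption loop the bullets above describe could be sketched as a wrapper that races the two implementations. A minimal Python sketch, with all names (`SelfTestingDispatcher`, `required_matches`) invented for illustration:

```python
import concurrent.futures

class SelfTestingDispatcher:
    """Runs an original and a candidate replacement in parallel.

    Adopts the replacement after `required_matches` identical results;
    falls back to the original if the adopted replacement ever raises.
    Illustrative sketch only, not a production design.
    """

    def __init__(self, original, replacement, required_matches=100):
        self.original = original
        self.replacement = replacement
        self.required_matches = required_matches
        self.matches = 0
        self.adopted = False
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

    def __call__(self, *args):
        if self.adopted:
            try:
                return self.replacement(*args)
            except Exception:
                # Replacement failed after adoption: retry the same
                # operation with the superseded original.
                self.adopted = False
                return self.original(*args)

        # Trial phase: run both in parallel; the original's result is
        # always the one returned to the caller.
        orig_future = self._pool.submit(self.original, *args)
        repl_future = self._pool.submit(self.replacement, *args)
        expected = orig_future.result()
        try:
            # Veto: the replacement must be done when the original is.
            candidate = repl_future.result(timeout=0)
            if candidate == expected:
                self.matches += 1
                if self.matches >= self.required_matches:
                    self.adopted = True
            else:
                self.matches = 0  # any disagreement resets the trial
        except Exception:
            self.matches = 0
        return expected
```

The `timeout=0` call implements the "original runs to completion" veto: if the replacement is not already finished at that moment, the trial fails for that run. Real code would also need to worry about side effects and thread safety, as the comments below point out.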

Have you seen a similar concept in practice? Critique, please...

Below are comments added after the initial question, in response to posts:

* The system demonstrates a Darwinian approach to system evolution.

* The original and replacement would run in parallel not in series.

* Race conditions are an inherent issue in multi-threaded apps, and I acknowledge them.

2008/09/13

Accepted Answer

I believe this idea to be an interesting theoretical debate, but not very practical for the following reasons:

  1. To make sure the new version of the code works well, you need superb automatic tests, which is a goal that is very hard to achieve and one that many companies never reach. You can only go ahead with implementing the system after such automatic tests are in place.
  2. The whole point of this system is performance tuning; that is, a specific version of the code is replaced by a version that supersedes it in performance. For most applications today, performance is of minor importance. Meaning, the overall performance of most applications is adequate. Just think about it: you probably rarely find yourself complaining that "this application is excruciatingly slow"; instead you usually find yourself complaining about the lack of a specific feature, stability issues, UI issues, etc. Even when you do complain about slowness, it's usually the overall slowness of your system and not just a specific application (there are exceptions, of course).
  3. For applications or modules where performance is a big issue, the way to improve them is usually to identify the bottlenecks, write a new version, and test it independently of the system first, using some kind of benchmarking. Benchmarking the new version of the entire application might also be necessary, of course, but in general I think this process would take place only a very small number of times (following the 80/20 rule). Doing this process "manually" in these cases is probably easier and more cost-effective than the described system.
  4. What happens when you add features, fix non-performance related bugs etc.? You don't get any benefit from the system.
  5. Running the two versions in conjunction to compare their performance has far more problems than you might think: not only might you have race conditions, but if the input is not an appropriate benchmark, you might get the wrong result (e.g., if the test input is loads of small data packets when 90% of the time the real input is large data packets). Furthermore, it might just be impossible (for example, if the actual code changes the data, you can't run the two versions in conjunction).
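The "manual" process from point 3 (prove correctness first, then benchmark the replacement in isolation) can be done with nothing more than standard-library tooling. A sketch in Python; the function names and workload are made up for illustration:

```python
import timeit

def original_sum(n):
    """The unoptimized original: a straightforward loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def replacement_sum(n):
    """The candidate replacement: a closed-form equivalent."""
    return n * (n - 1) // 2

# Correctness first: the replacement must agree with the original
# over the input range we care about.
assert all(original_sum(n) == replacement_sum(n) for n in range(1000))

# Then compare performance in isolation, outside the live system.
t_orig = timeit.timeit(lambda: original_sum(10_000), number=200)
t_repl = timeit.timeit(lambda: replacement_sum(10_000), number=200)
print(f"original: {t_orig:.4f}s, replacement: {t_repl:.4f}s")
```

Only after both checks pass would the replacement be swapped in, which is exactly the offline workflow the answer argues is cheaper than an automated in-production mechanism.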

The only "environment" where this sounds useful and actually "a must" is a "genetic" system that generates new versions of the code by itself, but that's a whole different story and not really widely applicable...

2008/09/13


Have I seen a similar concept in practice? No. But I'll propose an approach anyway.

It seems like most of your objectives would be met by some sort of super source-control system, which could be implemented with CruiseControl.

CruiseControl can run unit tests to ensure correctness of the new version.

You'd have to write a CruiseControl builder plugin that would execute the new version of your system against a series of existing benchmarks to ensure that the new version is an improvement.

If the CruiseControl build loop passes, then the new version would be accepted. Such a process would take considerable effort to implement, but I think it feasible. The unit tests and benchmark builder would have to be pretty slick.
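Real CruiseControl plugins are written in Java and wired up through XML configuration, but the gating logic the answer describes is simple enough to sketch on its own. A hypothetical benchmark gate (the threshold and names are assumptions, not CruiseControl API):

```python
import time

def run_benchmark(workload):
    """Time one run of a candidate workload; stands in for a real
    benchmark suite driven by the build loop."""
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

def gate(candidate, baseline_seconds, tolerance=1.10):
    """Return a build exit code: 0 (accept) if the candidate is no more
    than `tolerance` times slower than the recorded baseline, else 1.

    A CI loop would call this after the unit tests pass and reject
    the new version on a non-zero result.
    """
    elapsed = run_benchmark(candidate)
    if elapsed > baseline_seconds * tolerance:
        print(f"REGRESSION: {elapsed:.3f}s vs baseline {baseline_seconds:.3f}s")
        return 1
    print(f"OK: {elapsed:.3f}s is within tolerance of the baseline")
    return 0

# Example: gate a trivial workload against a generous baseline.
status = gate(lambda: sum(range(100_000)), baseline_seconds=0.5)
```

The hard part, as the answer notes, is not this gate but making the unit tests and benchmarks representative enough that passing them actually means something.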

2008/09/13

I think an Inversion of Control Container like OSGi or Spring could do most of what you are talking about. (dynamic loading by name)

You could build on top of their stuff. Then implement your code to

  1. divide work units into discrete modules / classes (strategy pattern)
  2. identify each module by unique name and associate a capability with it
  3. when a module is requested it is requested by capability and at random one of the modules with that capability is used.
  4. keep performance stats (get system tick before and after execution and store the result)
  5. if an exception occurs mark that module as do not use and log the exception.

If the modules do their work by message passing you can store the message until the operation completes successfully and redo with another module if an exception occurs.
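The five numbered steps above could be sketched as a small capability registry. This is an illustrative Python sketch, not OSGi or Spring API; the class and method names are invented:

```python
import random
import time
from collections import defaultdict

class CapabilityRegistry:
    """Modules register under a capability name (steps 1-2); one is
    picked at random per request (step 3); timings are recorded
    (step 4); a module that raises is retired and the operation is
    redone with another module (step 5)."""

    def __init__(self):
        self.modules = defaultdict(list)   # capability -> [(name, fn)]
        self.stats = defaultdict(list)     # name -> [elapsed seconds]
        self.banned = set()                # names marked do-not-use

    def register(self, capability, name, fn):
        self.modules[capability].append((name, fn))

    def invoke(self, capability, *args):
        candidates = [(n, f) for n, f in self.modules[capability]
                      if n not in self.banned]
        if not candidates:
            raise LookupError(f"no usable module for {capability!r}")
        name, fn = random.choice(candidates)
        start = time.perf_counter()
        try:
            result = fn(*args)
        except Exception:
            self.banned.add(name)                 # step 5: mark and log
            return self.invoke(capability, *args) # redo with another module
        self.stats[name].append(time.perf_counter() - start)
        return result
```

Because the arguments are held until a module succeeds, the redo in the exception path is the message-replay idea from the paragraph above, in miniature.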

2008/09/13

For design ideas for high availability systems, check out Erlang.

2008/09/17

I don't think code will learn to be better by itself. However, some runtime parameters can easily be adjusted toward optimal values, but that would be just regular programming, right?

About the on-the-fly change: I've wondered about the same thing, and would build it on top of Lua or a similar dynamic language. One could have parts that are loaded and, if they are replaced, reloaded into use. No rocket science in that, either. If the "old code" is still running, that's perfectly all right, since unlike with DLLs, the file is needed only while it is being read in, not while executing the code that came from it.
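The reload mechanics described here have a direct analogue in Python with `importlib.reload`. A self-contained sketch (the module name and contents are made up; bytecode caching is disabled so the reload always re-reads the source file):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # force reloads to re-read the source

# Create a throwaway module on disk, standing in for a loaded "part".
workdir = pathlib.Path(tempfile.mkdtemp())
module_file = workdir / "hot_part.py"
module_file.write_text("def answer():\n    return 'v1'\n")
sys.path.insert(0, str(workdir))

import hot_part
first = hot_part.answer()   # the old code is in use

# Replace the source on disk; the running copy is unaffected until
# we explicitly reload, just as described for the Lua approach.
module_file.write_text("def answer():\n    return 'v2'\n")
importlib.reload(hot_part)
second = hot_part.answer()  # the replaced part is now in use
```

As with the Lua version, code already executing from the old definition keeps running; only new lookups through the module see the replacement.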

Usefulness? Naa...

2008/09/17

Source: https://stackoverflow.com/questions/60478
Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow
Email: [email protected]