====== Conformance and Evaluation ======

Conformance analysis and process model evaluation have much in common. The main difference is that conformance analysis usually assumes that one model is given and assesses how well it matches the log, while process model evaluation accounts for the fact that many models with the same, or very similar, behavior are possible ("Which one is the best?").

===== Conformance Analysis =====

Conformance analysis requires, in addition to an event log, some a-priori model. This model may be handcrafted or obtained through process discovery. Whatever its source, ProM provides various ways of checking whether reality conforms to such a model. For example, there may be a process model indicating that purchase orders of more than one million Euro require two checks. Another example is checking the so-called "four-eyes principle". Conformance analysis may be used to detect deviations, to locate and explain them, and to measure their severity.

  * The [[conformance_checker|Conformance Checker]] compares an a-priori model with the observed reality stored in some MXML log, and both visualizes and quantifies the detected discrepancies.
  * The LTL Checker plug-in can be used when there is no complete a-priori process model but only a set of requirements (e.g., business rules). The documentation folder of ProM contains a tutorial that describes how it can be used.
  * The Semantic LTL Checker makes it possible to incorporate semantic information from ontologies.

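To illustrate the kind of rule such plug-ins verify, the sketch below checks the four-eyes principle on a toy event log. The log structure and the activity and performer names are hypothetical, purely for illustration; the actual checks in ProM run on MXML logs via the plug-ins listed above.

```python
# Illustrative sketch only -- not ProM code. A trace is a list of
# (activity, performer) pairs; the four-eyes principle demands that
# repeated executions of a critical activity within one case are
# performed by different people.

def violates_four_eyes(trace, critical_activity):
    """True if some person performed the critical activity more than once."""
    performers = [who for (act, who) in trace if act == critical_activity]
    return len(performers) > len(set(performers))

# hypothetical toy log: one compliant case, one violating case
log = {
    "case_1": [("register", "ann"), ("check", "bob"), ("check", "carol")],
    "case_2": [("register", "ann"), ("check", "bob"), ("check", "bob")],
}

violations = [case for case, trace in log.items()
              if violates_four_eyes(trace, "check")]
print(violations)  # -> ['case_2']
```

A conformance checker does more than flag the violating case: it also locates the offending events and quantifies how severe the deviation is across the whole log.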
===== Process Model Evaluation =====

To compare the results obtained from different process discovery algorithms, or simply to find out how well a discovered model captures the observed behavior (e.g., how many cases are actually covered), evaluation techniques are needed.

  * Over the years, process mining researchers have developed a number of metrics that assess the quality of a (mined) model; the [[online:controlflowbenchmark|Control Flow Benchmark]] plug-in integrates those that are implemented in ProM.
  * The [[online:conformance_checker|Conformance Checker]] provides diagnostic visualizations for some of the metrics in the Control Flow Benchmark plug-in.
  * The [[online:mdl|Minimum Description Length]] analysis plug-in evaluates process models based on the MDL principle known from the machine learning domain.
  * The Behavioral Precision / Recall plug-in (its metrics are also available in the Control Flow Benchmark plug-in).
  * The Structural Precision / Recall plug-in (its metrics are also available in the Control Flow Benchmark plug-in).
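The metrics above differ considerably in detail, but a very coarse fitness notion can be sketched as the fraction of log traces that the model can replay. In the sketch below the "model" is simply a set of accepted activity sequences, which is an assumption for illustration; ProM's actual metrics (e.g., token-based fitness, behavioral precision/recall) replay logs on Petri nets and also reward partial fits.

```python
# Coarse illustration of a trace-level fitness metric: the fraction of
# log traces that are in the model's language. Not one of ProM's metrics.

def trace_fitness(log_traces, model_language):
    """Fraction of log traces accepted by the model (1.0 for an empty log)."""
    if not log_traces:
        return 1.0
    fitting = sum(1 for t in log_traces if tuple(t) in model_language)
    return fitting / len(log_traces)

# hypothetical model: the set of activity sequences it allows
model = {("a", "b", "c"), ("a", "c", "b")}
log = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "b"], ["a", "b", "c"]]

print(trace_fitness(log, model))  # -> 0.75
```

A metric like this only measures recall of observed behavior; precision-oriented metrics additionally penalize models that allow far more behavior than the log ever exhibits, which is why the plug-ins above report both directions.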

===== Other Kinds of Comparison =====
 +
  * The [[tracediff|Trace Diff Analysis]] plug-in makes it possible to compare two log traces and to visualize their differences.

  * The [[hmm|HMM Experimenter]] generates logs with different levels of noise based on a given set of process models. With this set-up, one can compare how the fitness values of different metrics develop as the noise level increases.
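Comparing two traces is essentially a sequence-alignment problem. The sketch below, which is not the Trace Diff Analysis implementation, uses Python's standard ''difflib'' on two hypothetical traces to show where they agree and where they diverge:

```python
from difflib import SequenceMatcher

# Sketch of trace comparison as sequence alignment (illustrative only).
# Each opcode marks a region where the traces match, or where one trace
# replaces, inserts, or deletes activities relative to the other.
trace_a = ["register", "check", "pay", "ship"]
trace_b = ["register", "check", "check", "ship"]

matcher = SequenceMatcher(a=trace_a, b=trace_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    print(tag, trace_a[i1:i2], trace_b[j1:j2])
```

Here the traces match on the prefix and suffix, and the alignment isolates the single position where ''pay'' was replaced by a second ''check'' -- the kind of difference a trace-diff visualization highlights.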