I am interested in metrics that try to measure how complex a given graph is, the corresponding algorithms, and their time complexity. A short description or a list of relevant papers would be great.
It might help to say that I have two graphs and I want to somehow tell which one is "more complex." I will use this metric as a heuristic, so I would like to try various metrics on empirical data. (It might help even more if I say that those graphs represent two FSMs.)
Asked By : bellpeace
Answered By : Realz Slaw
You can use all sorts of graph properties this way; how useful they are obviously depends on your purpose.
A non-comprehensive list of graph attributes, for general undirected graphs:
Compare simple graph attributes: $|V|$, $|E|$, $\frac{|E|}{|V|}$; you must decide if more or less is considered "more complex".
Obviously, if the graph has multiple components, you can use the component count as a measure of simplicity or complexity, and/or you can treat each component as a separate graph.
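For example, these simple attributes are one-liners with a library such as networkx (my choice of library, not something the answer prescribes; the example graph is made up):

```python
import networkx as nx

# Made-up example graph; substitute your own.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4)])

num_vertices = G.number_of_nodes()              # |V|
num_edges = G.number_of_edges()                 # |E|
ratio = num_edges / num_vertices                # |E|/|V|
components = nx.number_connected_components(G)  # component count

print(num_vertices, num_edges, ratio, components)
```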
Treewidth of the undirected graph; computing treewidth exactly takes time exponential in the treewidth $k$, but for any fixed $k$ you can decide whether the treewidth is at most $k$ in time linear in the size of the graph. Consider graphs with higher treewidth to be "more complex".
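Exact treewidth solvers are specialized tools, but networkx ships heuristic upper bounds; a minimal sketch (the heuristic returns an upper bound on the true treewidth, not the exact value):

```python
import networkx as nx
from networkx.algorithms import approximation as approx

G = nx.petersen_graph()  # example graph

# Min-degree heuristic: returns (width, tree decomposition);
# the width is an upper bound on the true treewidth.
width, decomposition = approx.treewidth_min_degree(G)
print("treewidth upper bound:", width)
```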
How "planar" a graph is
- Crossing number (NP-hard to compute)
- Compare the Kuratowski subgraphs returned when planarity testing fails (such forbidden subgraphs can be extracted quickly, but there can be many of them)
Presumably, more of them means more crossings are required, and thus the graph is more complex.
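Planarity testing itself is linear-time; networkx can also hand back one Kuratowski subgraph when the test fails. A minimal sketch, again with networkx as an assumed library:

```python
import networkx as nx

G = nx.complete_graph(5)  # K5, the classic non-planar example

is_planar, certificate = nx.check_planarity(G, counterexample=True)
if is_planar:
    print("planar")
else:
    # certificate is a Kuratowski subgraph (a K5 or K3,3 subdivision)
    print("non-planar; Kuratowski subgraph edges:", list(certificate.edges()))
```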
Sparseness, average vertex degree, vertex degree distribution
A low average degree with an even distribution can indicate a uniformly sparse graph, and thus perhaps lower "complexity". The average is easy to compute: sum the vertex degrees and divide by $|V|$. The distribution is slightly more involved, depending on how you choose to summarize it; see Wikipedia on degree distribution.
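Both take a single pass over the vertices; a sketch (the random example graph is made up):

```python
import networkx as nx
from collections import Counter

G = nx.erdos_renyi_graph(50, 0.1, seed=1)  # made-up example graph

degrees = [d for _, d in G.degree()]
avg_degree = sum(degrees) / len(degrees)

# Degree distribution as a histogram: degree -> number of vertices.
distribution = Counter(degrees)

print("average degree:", avg_degree)
print("degree histogram:", dict(sorted(distribution.items())))
```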
Average eccentricity (see Wikipedia on Distance (graph theory)), eccentricity distribution
Diameter and radius, and their ratio. Example: large diameter, small radius (again, see Wikipedia on Distance (graph theory)); test ratios of these on "complex" and non-"complex" graphs and see whether they differentiate.
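networkx computes these directly; note they are only defined for connected graphs, so split into components first if needed. A sketch:

```python
import networkx as nx

G = nx.path_graph(10)  # made-up example; must be connected

ecc = nx.eccentricity(G)                  # per-vertex eccentricity
avg_ecc = sum(ecc.values()) / len(ecc)
diameter = nx.diameter(G)                 # max eccentricity
radius = nx.radius(G)                     # min eccentricity

print(avg_ecc, diameter, radius, diameter / radius)
```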
Compare the dimension of the graphs (NP-hard to compute)
Have some subgraphs you consider "complex" and test for subgraph isomorphism against your actual graphs (NP-complete), or use maximum common subgraph isomorphism (NP-hard).
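networkx's GraphMatcher implements the VF2 algorithm for (sub)graph isomorphism; it is exponential in the worst case but often practical for small patterns. A sketch, with made-up host and pattern graphs:

```python
import networkx as nx
from networkx.algorithms import isomorphism

G = nx.erdos_renyi_graph(20, 0.3, seed=2)  # made-up host graph
pattern = nx.complete_graph(4)             # a subgraph you consider "complex"

# VF2: does G contain a node-induced subgraph isomorphic to `pattern`?
matcher = isomorphism.GraphMatcher(G, pattern)
print("contains pattern:", matcher.subgraph_is_isomorphic())
```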
- Perhaps some sort of compressibility/entropy measure, but this touches on isomorphism: the same graph can be laid out or represented as a matrix in multiple ways, not all of them equally compressible. A canonical graph labeling could help with this, but computing one is itself difficult.
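As a crude proxy, you could compress the adjacency matrix and compare compressed sizes. This is only a sketch of the idea, and exactly for the reason above it is not isomorphism-invariant: relabeling the vertices changes the result, so you would want a canonical labeling first.

```python
import zlib
import networkx as nx
import numpy as np

def compressed_size(G):
    """Crude complexity proxy: zlib-compressed adjacency matrix size.
    NOT isomorphism-invariant; relabeling vertices changes the result."""
    A = nx.to_numpy_array(G, dtype=np.uint8)
    return len(zlib.compress(A.tobytes()))

print(compressed_size(nx.path_graph(30)))                     # regular: compresses well
print(compressed_size(nx.gnp_random_graph(30, 0.5, seed=3)))  # random: compresses poorly
```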
Some ideas for an FSM:
- Size (though note that you can have equivalent FSMs of different sizes; these measurements are about the graph complexity, not necessarily FSM complexity)
- You can do a topological sort or compute a level structure, and compare the FSMs that way; possible measures of complexity: depth and level width (see the sketch below).
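For an FSM viewed as a directed graph with a known start state, the level structure is just BFS distance from the start, and depth and level widths fall out directly. A sketch; the transition graph and the start state "s0" are made-up assumptions:

```python
import networkx as nx
from collections import Counter

# Made-up FSM transition graph; "s0" as the start state is an assumption.
fsm = nx.DiGraph([("s0", "s1"), ("s0", "s2"), ("s1", "s3"), ("s2", "s3")])

# state -> BFS depth from the start; unreachable states are omitted.
levels = nx.single_source_shortest_path_length(fsm, "s0")
depth = max(levels.values())
width_per_level = Counter(levels.values())  # depth -> number of states

print("depth:", depth)
print("max level width:", max(width_per_level.values()))
```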
You can do a DFS and detect cycles; perhaps more cycles == more complex?
Counting cycles can be a hard problem (NP-hard in general). Perhaps you can find some relaxation for your specific graphs, for example if they are planar (I don't know that counting cycles in planar graphs is any easier, but that is how you would start searching for special cases). There are some approximation algorithms for general cycle counting, though. See "Counting cycles in an undirected graph using DFS-XOR algorithm" (IEEE, paywalled); abstract:
We present an algorithm for counting the number of cycles in an undirected graph. The given algorithm generates exact results, but it is not guaranteed to run in polynomial time. An approximated version of the algorithm, guaranteed to run in polynomial time, is also introduced. The obtained results were used to measure the entropy of graphs; entropy represents the robustness of the graph under analysis. An experiment is conducted to compare the results of our algorithm with those of another algorithm based on Donald Johnson's backtracking algorithm.
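While counting all simple cycles is hard, the circuit rank (the number of independent cycles, $|E| - |V| + c$ for $c$ connected components) is trivial to compute and may serve as a cheap proxy. A sketch for the undirected case:

```python
import networkx as nx

G = nx.house_graph()  # example: 5 nodes, 6 edges, 2 independent cycles

# Circuit rank = |E| - |V| + (number of connected components).
circuit_rank = (G.number_of_edges()
                - G.number_of_nodes()
                + nx.number_connected_components(G))

# Equivalently, the size of a cycle basis:
assert circuit_rank == len(nx.cycle_basis(G))
print("circuit rank:", circuit_rank)
```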
Question Source : http://cs.stackexchange.com/questions/16512