sourcery_analytics.analysis
Compute and aggregate metrics over nodes, source code, files, and directories.
- sourcery_analytics.analysis.analyze_methods(item: Union[str, NodeNG, Path], /, metrics: Union[None, Callable[[FunctionDef], MetricResult], Iterable[Callable[[FunctionDef], MetricResult]]] = None, compounder: Compounder = name_metrics, aggregation: Aggregation[T] = list) → T
Extracts methods from ``item``, then computes and aggregates metrics.
- Parameters
item – source code, node, file path, or directory path to analyze
metrics – list of node metrics to compute
compounder – method to combine individual metrics into compound metric
aggregation – method to combine the results
Examples
>>> from sourcery_analytics.metrics import (
...     method_name,
...     method_cognitive_complexity,
... )
>>> source = '''
... def foo():
...     return False
... def bar():
...     if foo():
...         return False
...     return True
... '''
>>> analyze_methods(
...     source,
...     metrics=(method_name, method_cognitive_complexity)
... )
[{'method_name': 'foo', 'method_cognitive_complexity': 0}, {'method_name': ...
>>> from sourcery_analytics.metrics.aggregations import average
>>> sorted(analyze_methods(
...     source,
...     metrics=(method_name, method_cognitive_complexity),
...     aggregation=average
... ))
[('method_cognitive_complexity', 0.5), ('method_name', None)]
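The pipeline ``analyze_methods`` implements — extract function nodes, apply each metric callable, and collect named results — can be sketched with the standard library alone. The helper and metric names below are illustrative stand-ins, not part of ``sourcery_analytics`` (real metrics operate on astroid nodes, not ``ast`` nodes):

```python
import ast


def sketch_analyze_methods(source, metrics):
    """Extract function defs and apply each metric, keyed by metric name.

    A simplified stand-in for analyze_methods, using the stdlib ast module.
    """
    tree = ast.parse(source)
    methods = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return [{metric.__name__: metric(node) for metric in metrics} for node in methods]


def method_name(node):
    return node.name


def method_length(node):
    # Crude stand-in metric: count top-level statements in the body.
    return len(node.body)


source = '''
def foo():
    return False

def bar():
    if foo():
        return False
    return True
'''
print(sketch_analyze_methods(source, (method_name, method_length)))
```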
- sourcery_analytics.analysis.analyze(nodes: Union[NodeNG, Iterable[NodeNG]], /, metrics: Union[None, Callable[[NodeNG], MetricResult], Iterable[Callable[[NodeNG], MetricResult]]] = None, compounder: Compounder = name_metrics, aggregation: Aggregation[T] = list) → T
Computes and aggregates metrics over ``nodes``.
Examples
>>> import astroid
>>> from pprint import pprint
>>> from sourcery_analytics.metrics import (
...     method_name,
...     method_cognitive_complexity,
...     method_cyclomatic_complexity,
... )
>>> source = '''
... def add(x, y): #@
...     return x + y
...
... def div(x, y): #@
...     return None if y == 0 else x / y
... '''
>>> methods = astroid.extract_node(source)
>>> pprint(
...     analyze(
...         methods,
...         metrics=(
...             method_name,
...             method_cognitive_complexity,
...             method_cyclomatic_complexity
...         )
...     )
... )
[{'method_cognitive_complexity': 0,
  'method_cyclomatic_complexity': 0,
  'method_name': 'add'},
 {'method_cognitive_complexity': 2,
  'method_cyclomatic_complexity': 2,
  'method_name': 'div'}]
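Conceptually, the ``compounder`` turns several metric callables into one callable returning a named dict per node, and the ``aggregation`` reduces those dicts into a final result. A minimal standalone sketch of that pattern, using toy string "nodes" and string "metrics" in place of astroid nodes (none of these helpers are part of ``sourcery_analytics``):

```python
def name_metrics_sketch(metrics):
    """Compounder sketch: combine metric callables into one callable
    that returns a dict of named results per node."""
    def compound(node):
        return {metric.__name__: metric(node) for metric in metrics}
    return compound


def average_sketch(results):
    """Aggregation sketch: average each named metric across all results."""
    totals = {}
    count = 0
    for result in results:
        count += 1
        for name, value in result.items():
            totals[name] = totals.get(name, 0) + value
    return sorted((name, total / count) for name, total in totals.items())


# Toy "nodes" and "metrics": strings and string metrics stand in for
# astroid nodes and node metrics.
def length(word):
    return len(word)


def vowels(word):
    return sum(ch in "aeiou" for ch in word)


nodes = ["alpha", "beta", "gamma"]
compound = name_metrics_sketch([length, vowels])
print(average_sketch(compound(node) for node in nodes))
```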
- sourcery_analytics.analysis.assess(nodes: Union[NodeNG, Iterable[NodeNG]], /, metrics: Union[None, Callable[[NodeNG], MetricResult], Iterable[Callable[[NodeNG], MetricResult]]] = None, threshold_settings: ThresholdSettings = ThresholdSettings()) → Iterator[ThresholdBreachDict]
Yields a breach record for each node whose metric value exceeds the corresponding threshold.
- Parameters
nodes – an iterable of nodes, compatible with the metrics
metrics – a collection of metrics, which may have thresholds in the settings
threshold_settings – describes the maximum allowed value for the metrics
Examples
>>> from pprint import pprint
>>> from sourcery_analytics.metrics import (
...     method_length,
...     method_cyclomatic_complexity
... )
>>> source = '''
... def bin(xs):
...     for x in xs:
...         if x < 8:
...             yield "small"
...         elif x < 10:
...             yield "medium"
...         elif x < 12:
...             yield "large"
...         else:
...             yield "extra large"
... def call(f, *args, **kwargs):
...     return f(*args, **kwargs)
... '''
>>> nodes = extract(source, is_method)
>>> metrics = [method_length, method_cyclomatic_complexity]
>>> threshold_settings = ThresholdSettings(method_cyclomatic_complexity=2)
>>> # note: the above value is unreasonably low
>>> pprint(
...     list(
...         assess(
...             nodes,
...             metrics=metrics,
...             threshold_settings=threshold_settings
...         )
...     )
... )
[{'method_file': '<?>',
  'method_lineno': 1,
  'method_name': 'bin',
  'metric_name': 'method_cyclomatic_complexity',
  'metric_value': 5}]
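The core of a threshold check is simple: compare each computed metric value against its configured maximum and emit a long-form record for every breach. A minimal sketch of that filtering step, operating on wide-form analysis dicts (the helper is illustrative, not part of ``sourcery_analytics``, and omits the file/line fields the real ``assess`` includes):

```python
def assess_sketch(results, thresholds):
    """Yield a long-form breach record for each metric value over its limit.

    results:    wide-form analysis dicts, e.g. {"method_name": ..., metric: value}
    thresholds: mapping of metric name to maximum allowed value
    """
    for result in results:
        for metric, limit in thresholds.items():
            value = result.get(metric)
            if value is not None and value > limit:
                yield {
                    "method_name": result.get("method_name"),
                    "metric_name": metric,
                    "metric_value": value,
                }


results = [
    {"method_name": "bin", "method_cyclomatic_complexity": 5},
    {"method_name": "call", "method_cyclomatic_complexity": 1},
]
# Only "bin" exceeds the (unreasonably low) threshold of 2.
print(list(assess_sketch(results, {"method_cyclomatic_complexity": 2})))
```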
- sourcery_analytics.analysis.melt(results: Iterable[NamedMetricResult], metrics: List[Callable[[NodeNG], MetricResult]]) → Iterator[ThresholdBreachDict]
Converts “wide-form” analysis into “long-form” analysis.
For each named metric of the form ``{metric_name: metric_value, ...}``, create a new dictionary ``{"metric_name": metric_name, "metric_value": metric_value}``. Keys in the analysis (such as the method name) that are not metrics are left unchanged.
Inspired by the functionality of pandas’ .melt() method.
- Parameters
results – an iterable of named metric results, typically the output of a call to analyze()
metrics – a list of metric functions; should match the names used in the analysis
Examples
>>> from pprint import pprint
>>> from sourcery_analytics.metrics import (
...     method_name,
...     method_length,
...     method_cyclomatic_complexity
... )
>>> source = '''
... def maturity(cheese):
...     if cheese.years > 5:
...         return "seriously mature"
...     else:
...         return "quite mild"
... '''
>>> results = analyze_methods(
...     source, metrics=[
...         method_name,
...         method_length,
...         method_cyclomatic_complexity
...     ]
... )
>>> pprint(results)
[{'method_cyclomatic_complexity': 2,
  'method_length': 3,
  'method_name': 'maturity'}]
>>> pprint(
...     list(
...         melt(
...             results,
...             metrics=[
...                 method_length,
...                 method_cyclomatic_complexity
...             ]
...         )
...     )
... )
[{'method_name': 'maturity',
  'metric_name': 'method_length',
  'metric_value': 3},
 {'method_name': 'maturity',
  'metric_name': 'method_cyclomatic_complexity',
  'metric_value': 2}]
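The wide-to-long transformation ``melt`` performs can be sketched standalone: split each record into identifier keys and metric keys, then emit one record per metric. This helper takes metric *names* as strings for simplicity, whereas the real ``melt`` takes the metric functions themselves; it is illustrative, not part of ``sourcery_analytics``:

```python
def melt_sketch(results, metric_names):
    """Wide-form to long-form: one output record per (identifiers, metric) pair.

    Non-metric keys (e.g. "method_name") are carried over unchanged;
    each metric key becomes a "metric_name"/"metric_value" pair.
    """
    for result in results:
        identifiers = {k: v for k, v in result.items() if k not in metric_names}
        for name in metric_names:
            if name in result:
                yield {**identifiers, "metric_name": name, "metric_value": result[name]}


wide = [{"method_name": "maturity", "method_length": 3,
         "method_cyclomatic_complexity": 2}]
long_form = list(melt_sketch(wide, ["method_length", "method_cyclomatic_complexity"]))
print(long_form)
```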