meshed.tests.test_getitem

meshed.tests.test_getitem.classifier_score(confusion_count, confusion_value)[source]

Compute a score for a classifier that produced the confusion_count, based on the given confusion_value. Meant to be curried by fixing the confusion_value dict.

The function is purposely general – it is not specific to binary classifier outcomes, or to classifier outcomes at all. It simply computes a normalized dot product, relying on the input keys to align the values to multiply, and treating a missing key as a null value.
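A minimal sketch of how such a score could be computed, assuming the normalization is by the total observation count; the `confusion_value` keys here are hypothetical (prediction, truth) pairs, and currying is shown with `functools.partial`:

```python
from functools import partial


def dot_product(a, b):
    # Multiply values whose keys appear in both dicts; a missing key
    # acts as a null (zero) factor.
    return sum(a[k] * b[k] for k in a.keys() & b.keys())


def classifier_score(confusion_count, confusion_value):
    # Normalized dot product (sketch): weight each (prediction, truth)
    # count by its value, then divide by the total number of observations.
    total = sum(confusion_count.values())
    return dot_product(confusion_count, confusion_value) / total


# Curry by fixing the confusion_value dict, e.g. rewarding true
# positives and penalizing false positives (hypothetical values).
value = {(1, 1): 1, (1, 0): -1}
score = partial(classifier_score, confusion_value=value)
counts = {(1, 1): 8, (1, 0): 2, (0, 0): 90}
score(counts)  # (8*1 + 2*(-1)) / 100 = 0.06
```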

meshed.tests.test_getitem.confusion_count(prediction, truth)[source]

Get a dict containing the counts of all combinations of prediction and corresponding truth values.
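One plausible implementation of this counting, pairing predictions with truths and tallying each (prediction, truth) combination with `collections.Counter` (a sketch, not necessarily the module's actual code):

```python
from collections import Counter


def confusion_count(prediction, truth):
    # Pair each prediction with its corresponding truth value and
    # count how often each (prediction, truth) combination occurs.
    return dict(Counter(zip(prediction, truth)))


confusion_count([1, 1, 0, 0], [1, 0, 0, 0])
# {(1, 1): 1, (1, 0): 1, (0, 0): 2}
```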

meshed.tests.test_getitem.dot_product(a, b)[source]
>>> dot_product({'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': -1, 'd': 'whatever'})
5
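A sketch consistent with the doctest above: only keys present in both mappings contribute, so a key missing from either side (like `'a'` or `'d'`) acts as a null value and its entry is ignored entirely.

```python
def dot_product(a, b):
    # Sum the products of values whose keys appear in both mappings;
    # keys missing from either side contribute nothing.
    return sum(a[k] * b[k] for k in a.keys() & b.keys())


dot_product({'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': -1, 'd': 'whatever'})
# 2*4 + 3*(-1) = 5
```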
meshed.tests.test_getitem.predict_proba(model, X_test)[source]

Get the predict_proba scores of a model given some test data.
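Presumably a thin wrapper delegating to the model's own `predict_proba` method (the scikit-learn interface); the `ConstantModel` below is a hypothetical stand-in used only for illustration:

```python
def predict_proba(model, X_test):
    # Delegate to the model's predict_proba method (assumed sketch).
    return model.predict_proba(X_test)


class ConstantModel:
    # Hypothetical stand-in model: returns 0.5 for each class,
    # one row per test sample.
    def predict_proba(self, X):
        return [[0.5, 0.5] for _ in X]


predict_proba(ConstantModel(), [[0], [1]])
# [[0.5, 0.5], [0.5, 0.5]]
```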

meshed.tests.test_getitem.prediction(predict_proba, threshold)[source]

Get an array of predictions by thresholding the scores in the predict_proba array.
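The thresholding step can be sketched as below; whether the comparison is `>=` or strictly `>` is an assumption here:

```python
def prediction(predict_proba, threshold):
    # Binarize scores: 1 where the positive-class score clears the
    # threshold, else 0 (sketch; >= vs > is an assumption).
    return [int(p >= threshold) for p in predict_proba]


prediction([0.2, 0.7, 0.5, 0.9], threshold=0.5)
# [0, 1, 1, 1]
```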