Morph

MLFlow Tracking

MLFlow server tracking is used to capture metrics and statistics for analyzing how the generative AI is performing. Below is an overview of what is currently in use.

The current MLFlow source code may be found here (see the README there for usage instructions):
https://bitbucket.org/quavo-inc/qfd-datascience-emailclassification/src/master/

You may access the current MLFlow server here (make sure it's running; if it's not, contact Andy W.):
http://mlflow.quavo.net:5000/
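
For reference, here is a minimal sketch of pointing an MLFlow client at that server and logging a metric. The experiment name and metric value are made-up placeholders, not project settings:

```python
# Minimal sketch: connect to the tracking server above and log one metric.
# Assumes the mlflow package is installed (pip install mlflow).
import mlflow

mlflow.set_tracking_uri("http://mlflow.quavo.net:5000/")

# "email-classification" is a hypothetical experiment name for illustration.
mlflow.set_experiment("email-classification")

with mlflow.start_run():
    mlflow.log_metric("f1_score", 0.82)  # illustrative value only
```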

Statistics currently measured:

Here are simple definitions for Precision, Recall, and F1 Score:

  1. Precision:

    • Definition: Precision is the ratio of correctly predicted positive observations to the total predicted positives. It tells us how many of the predicted positive cases were actually correct.

    • Formula:

      Precision = TP / (TP + FP), where TP is the number of true positives and FP is the number of false positives.

    • Example: If a model predicts 10 positive cases and 7 of them are actually positive, the precision is

      7 / 10 = 0.7.

  2. Recall (also known as Sensitivity or True Positive Rate):

    • Definition: Recall is the ratio of correctly predicted positive observations to all the observations in the actual class. It measures how well the model can identify all relevant cases.

    • Formula:

      Recall = TP / (TP + FN), where TP is the number of true positives and FN is the number of false negatives.

    • Example: If there are 20 actual positive cases and the model correctly identifies 15 of them, the recall is

      15 / 20 = 0.75.

  3. F1 Score:

    • Definition: The F1 Score is the harmonic mean of precision and recall. It provides a single metric that balances both the precision and recall of the model.

    • Formula:

      F1 = 2 × (Precision × Recall) / (Precision + Recall).

These metrics are often used to evaluate the performance of classification models, especially in cases where the classes are imbalanced.
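
As a quick illustration, the sketch below computes all three metrics with scikit-learn on a small made-up label set (assuming scikit-learn is installed; the labels are illustrative, not project data):

```python
# Toy example: compute Precision, Recall, and F1 on made-up labels.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]  # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]  # model predictions

# For this data TP = 4, FP = 1, FN = 1, so all three metrics equal 0.8.
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP) = 0.8
print("Recall:", recall_score(y_true, y_pred))        # TP / (TP + FN) = 0.8
print("F1 Score:", f1_score(y_true, y_pred))          # harmonic mean = 0.8
```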
