Friday, November 1, 2019

Relegator working. Now need to test the following:

There is only one thing I still don't understand about the relegator moons implementation: the significance calculated during training is lower (by a factor of ~5) than the ultimate significance when the model is applied to the weighted dataset.  This could be because the training significance calculation uses the predicted probabilities to weight events.  Will look into it.
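One way to test that hypothesis is to compute the significance both ways on the same events: once with a hard accept/reject cut, and once with each event weighted by its predicted signal probability (as a differentiable training proxy would do). The sketch below is a minimal, hypothetical version of the two calculations; the function names and the s/sqrt(s+b) figure of merit are assumptions, not the actual relegator code.

```python
import numpy as np

def hard_significance(y_true, y_pred, weights):
    """Significance s/sqrt(s+b) using hard class assignments (applied-analysis style)."""
    sel = y_pred == 1                        # events classified as signal
    s = weights[sel & (y_true == 1)].sum()   # true signal passing the cut
    b = weights[sel & (y_true == 0)].sum()   # background passing the cut
    return s / np.sqrt(s + b) if s + b > 0 else 0.0

def soft_significance(y_true, p_sig, weights):
    """Training-time proxy: every event contributes, weighted by its
    predicted signal probability p_sig instead of a hard cut."""
    s = (weights * p_sig)[y_true == 1].sum()
    b = (weights * p_sig)[y_true == 0].sum()
    return s / np.sqrt(s + b) if s + b > 0 else 0.0
```

In the soft version every signal event is diluted by p_sig < 1, while background events with small-but-nonzero p_sig still leak into b, so the proxy can sit well below the hard-cut value; comparing the two numbers on held-out events would show whether that accounts for the factor-of-~5 gap.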

OK, to check:

  1. Run various models on various datasets (i.e., different signal fractions) to see if the relegator actually gives better performance than the binary softmax.  Multiple trials on each dataset to check stability.  run_master.py does this; running on wintermute right now.  Need to write a script to visualize results.
  2. Apply models that have been trained on various randomly generated weighted datasets.  When sig_frac is small, there will be a large amount of statistical variation in n_S between weighted datasets, so apply a trained model to many weighted datasets to investigate the variation in analysis power.  Need to write a new script to do this.
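For item 2, the new script could look something like the sketch below: Poisson-fluctuate the signal and background counts per pseudo-dataset, resample events, apply the trained model, and collect the spread of significances. Everything here is a hypothetical placeholder (`model_predict`, `X_sig`, `X_bkg`, and the s/sqrt(s+b) figure of merit are assumptions), not the actual analysis code.

```python
import numpy as np

def significance_variation(model_predict, X_sig, X_bkg,
                           n_s_expected, n_b_expected,
                           n_trials=200, seed=0):
    """Apply a trained classifier to many pseudo-datasets and return the
    distribution of significances.  model_predict(X) -> 1 for signal."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_trials):
        # Poisson-fluctuate the event counts for this pseudo-dataset;
        # when n_s_expected is small this drives big variation in n_S.
        n_s = rng.poisson(n_s_expected)
        n_b = rng.poisson(n_b_expected)
        X = np.vstack([X_sig[rng.integers(0, len(X_sig), n_s)],
                       X_bkg[rng.integers(0, len(X_bkg), n_b)]])
        y = np.concatenate([np.ones(n_s), np.zeros(n_b)])
        pred = model_predict(X)
        s = np.sum((pred == 1) & (y == 1))   # signal passing
        b = np.sum((pred == 1) & (y == 0))   # background passing
        results.append(s / np.sqrt(s + b) if s + b > 0 else 0.0)
    return np.array(results)
```

The mean and spread of the returned array would then quantify how stable the analysis power is against the statistical variation between weighted datasets.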

