Resolving Conflicting Hypothesis Tests in Machine Learning World – Jeff Chen, Viaplay Group

Session Outline

After running an elegant A/B test or randomized controlled trial, imagine that two or more hypothesis tests disagree. In some instances, data scientists opt to apply heuristics to reconcile the tests, which in effect adds bias to the inference. In this session, Viaplay’s SVP of Data Science Jeff Chen will share a new approach that brings together supervised learning and simulation-driven methods.

Key Takeaways:

  • Hypothesis tests can draw on classification-based methods to vastly improve power, particularly in likelihood-free settings.
  • Mastering simulation methods vastly improves creativity in engineering new hypothesis tests.
  • Creating new classifier-based tests with higher power enables improved discovery of new opportunities.
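The abstract does not spell out the speaker's specific method, but a well-known instance of a classification-based hypothesis test is the classifier two-sample test (C2ST): train a classifier to distinguish two samples, and if its held-out accuracy beats chance by more than a permutation null allows, reject the hypothesis that the samples share a distribution. The sketch below is an illustrative, minimal NumPy version (plain logistic regression, permutation p-value), not the approach presented in the session:

```python
import numpy as np

rng = np.random.default_rng(0)

def c2st(x, y, n_perm=200):
    """Classifier two-sample test (illustrative sketch).

    Trains a logistic-regression classifier to tell sample x from sample y,
    scores it on a held-out split, and computes a permutation p-value for
    the null hypothesis that x and y come from the same distribution.
    """
    def holdout_accuracy(data, labels):
        # Random 50/50 train/test split.
        idx = rng.permutation(len(labels))
        tr, te = idx[: len(idx) // 2], idx[len(idx) // 2:]
        # Plain gradient descent on the logistic log-loss.
        w, b = np.zeros(data.shape[1]), 0.0
        for _ in range(300):
            p = 1.0 / (1.0 + np.exp(-(data[tr] @ w + b)))
            g = p - labels[tr]
            w -= 0.1 * data[tr].T @ g / len(tr)
            b -= 0.1 * g.mean()
        pred = (data[te] @ w + b) > 0
        return (pred == labels[te]).mean()

    data = np.vstack([x, y])
    labels = np.r_[np.zeros(len(x)), np.ones(len(y))]
    observed = holdout_accuracy(data, labels)
    # Permutation null: shuffle labels, refit, and record accuracy.
    null = [holdout_accuracy(data, rng.permutation(labels))
            for _ in range(n_perm)]
    p_value = (1 + sum(a >= observed for a in null)) / (1 + n_perm)
    return observed, p_value

# Two 2-D samples whose means differ: the classifier should beat chance.
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(1.0, 1.0, size=(200, 2))
acc, p = c2st(x, y)
```

Because the test statistic is just out-of-sample accuracy, any classifier can be substituted in, which is what makes the approach attractive in likelihood-free settings where no analytic test statistic is available.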
