Sunday, July 4, 2010

AISTATS 2010 Highlights

The conference kicked off with
Forensic Statistics: Where are We and Where are We Going?
Richard Gill
To add to Sebastien's comment, the statistical flaws introduced by the doctors and lawyers in the case included: confusing P(data|guilty) with P(guilty|data) (the prosecutor's fallacy), multiplying p-values over multiple tests, the post-hoc problem in frequentist hypothesis testing, arbitrary starting and stopping rules, violated i.i.d. assumptions, and fitting the protocol that defined events so as to maximize significance.
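The first of those, the prosecutor's fallacy, is worth a toy Bayes' rule calculation to see how far apart the two conditionals can be. All numbers below are invented for illustration and have nothing to do with the actual case:

```python
# Prosecutor's fallacy: a small P(data | innocent) does not imply a small
# P(innocent | data). All numbers here are invented for illustration.

p_data_given_innocent = 1e-4   # "one in ten thousand" coincidence
p_data_given_guilty = 0.5      # chance a guilty person produces this evidence
prior_guilty = 1e-6            # base rate of guilt in the relevant population

# Bayes' rule: P(guilty | data)
numer = p_data_given_guilty * prior_guilty
denom = numer + p_data_given_innocent * (1 - prior_guilty)
posterior_guilty = numer / denom

print(f"P(data | innocent) = {p_data_given_innocent:.0e}")   # 1e-04
print(f"P(guilty | data)   = {posterior_guilty:.4f}")        # ~0.0050, not ~1
```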

I liked
Reduced-rank hidden Markov models
S. Siddiqi, B. Boots and G. Gordon
Some cool stuff: closed-form alternatives to EM for training HMMs, with bounds on the loss of statistical efficiency compared to EM, and without EM's local-optima problem.
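For flavor, here is a minimal sketch of the spectral, observable-operator style of learning that reduced-rank HMMs build on (roughly the Hsu, Kakade and Zhang construction). This is my toy rendering, not the paper's algorithm: the function names, the rank k, and the Markov-chain test data are all assumptions.

```python
import numpy as np

def spectral_hmm(obs, n_symbols, k):
    """EM-free, closed-form HMM learning from empirical uni-, bi- and
    tri-gram statistics. obs is a 1-D array of integer symbol ids."""
    P1 = np.zeros(n_symbols)                            # P[x1]
    P21 = np.zeros((n_symbols, n_symbols))              # P[x2, x1]
    P3x1 = np.zeros((n_symbols, n_symbols, n_symbols))  # P[x3, x2=a, x1]
    for t in range(len(obs) - 2):
        x1, x2, x3 = obs[t], obs[t + 1], obs[t + 2]
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2][x3, x1] += 1
    P1 /= P1.sum(); P21 /= P21.sum(); P3x1 /= P3x1.sum()

    # One SVD of the bigram matrix gives the rank-k subspace; everything
    # else is linear algebra, so there are no local optima to fall into.
    U = np.linalg.svd(P21)[0][:, :k]
    b1 = U.T @ P1                                   # initial state
    binf = np.linalg.pinv(P21.T @ U) @ P1           # normalizer
    Bx = [U.T @ P3x1[a] @ np.linalg.pinv(U.T @ P21) for a in range(n_symbols)]
    return b1, binf, Bx

def seq_prob(seq, b1, binf, Bx):
    """P(x1..xt) ~= binf^T B_{xt} ... B_{x1} b1."""
    state = b1
    for x in seq:
        state = Bx[x] @ state
    return float(binf @ state)

# Toy usage: sample a 3-symbol Markov chain, fit a rank-2 model.
rng = np.random.default_rng(0)
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])
obs = [0]
for _ in range(4999):
    obs.append(rng.choice(3, p=T[obs[-1]]))
b1, binf, Bx = spectral_hmm(np.array(obs), n_symbols=3, k=2)
print(seq_prob([0, 0, 1], b1, binf, Bx))  # roughly 1/3 * 0.7 * 0.2
```

One pass over the data plus a single SVD: that is the closed-form appeal.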

Gaussian processes with monotonicity information
J. Riihimäki and A. Vehtari

I learned what Phil Hennig is up to in
Coherent inference on optimal play in game trees
P. Hennig, D. Stern and T. Graepel

Zoubin pulled in another best paper award in
Learning the structure of deep sparse graphical models
R. Adams, H. Wallach and Z. Ghahramani
Adams and Wallach used some of the MacKay magic.

On the Sunday after the conference there was the active learning workshop. Don Rubin (co-inventor of EM and co-author, with Andrew Gelman, of Bayesian Data Analysis) was the invited speaker, and he talked about experimental design and causality. He is definitely on the stats side of AISTATS. I asked him about measuring test-set performance and he asked why I would want to divide my data set in half (definitely out of sync with the ML culture!). It was also interesting to see the philosophical cracks between him and Phil Dawid, and the divisions among the Dawid, Rubin, and Pearl views on causal inference.
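For anyone outside the ML culture, the convention Rubin was puzzled by looks roughly like this. A toy sketch with synthetic data, not anything either of us actually ran:

```python
import numpy as np

# Fit on one half of the data, report error on the held-out half.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # synthetic inputs
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]    # fit on half
train_mse = np.mean((X_train @ w - y_train) ** 2)
test_mse = np.mean((X_test @ w - y_test) ** 2)          # report on the rest
print(f"train MSE {train_mse:.4f}, held-out MSE {test_mse:.4f}")
```

The held-out number is the one the ML crowd trusts, since training error is biased optimistic.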
