Saturday, December 27, 2008

NIPS Report

After attending NIPS 2008 I figure I should write up my impressions. For me, some of the highlights were:

  1. Shai Ben-David's student, Margareta Ackerman, gave a keynote presentation on the quality of clustering. After hearing her presentation and talking to her at her poster, I was unimpressed. I also think that a lot of the clustering quality stuff is BS. They create a set of axioms to evaluate the quality of clustering algorithms, and I find all of them somewhat questionable. Comparison of unsupervised algorithms, such as clustering, can be done by comparing marginal likelihoods (a small sketch of this kind of comparison appears after this list). It seemed that many of the ideas involved in Bayesian model comparison were a foreign language to Ackerman. A full review of the topic deserves its own post.
  2. Han Liu, John Lafferty and Larry Wasserman presented a paper on joint sparsity regression. It builds on a previous paper where they modify L1 regularization to induce joint sparsity across multiple regressions, causing certain factors to have zero influence in all of the regressions at once (see the group-shrinkage sketch after this list). They extend this to the nonparametric case: each regression is an additive model of nonlinear functions of each input, and the joint sparsity penalty causes certain functions to be zero everywhere in every regression. The regularization is quite strange. L1 regularization is equivalent to a MAP solution with a Laplace prior, but I am not sure what equivalent priors these regularizers have. In the nonparametric single-regression case, I think the regularizer is equivalent to a GP prior on the function where the covariance function is a Kronecker delta, though I have not proven that. A degeneracy of this model is that it drives the function to zero at every input that has not been observed. Searching through the paper, I found that they apply Gaussian kernel smoothing to the function afterwards to smooth it out, which seems like a bit of a hack to me. A full review of the topic deserves its own post.
  3. Byron Yu and John P. Cunningham presented a paper on Gaussian process factor analysis. They used independent GPs over time as the latent factors and then a factor-analysis-like linear transformation to explain the observed time series (a generative sketch of the model appears after this list). They applied it to neural spike data and got quite interesting results: they were able to visualize the activity of a monkey's motor cortex in 2D while it threw a ball.
  4. Ben Calderhead and Mark Girolami presented a paper on Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes. They were trying to infer the parameters of a nonlinear ODE. It was a little kludgy, as their model setup violated the likelihood principle. They modeled the time series of the system state with GPs, and they also modeled the derivatives of the system state via the ODE, so they had two models for the data. They inferred the parameters by trying to minimize the difference between the two; I think they modeled the difference as a Gaussian centered at zero (a toy version of this gradient-matching idea appears after this list). By inspecting the graphical model, however, you notice that the observed variables are independent of the ODE parameters. So, if one wanted to be completely principled, one could not use the model to infer anything about the parameters.
  5. Rebecca Saxe gave a presentation that I found quite interesting. She showed a certain brain region that is involved in thinking about what other people are thinking. It is therefore involved in moral judgement, because moral judgements often hinge on intent. She first correlated the activity in the region with people's moral judgements about hypothetical scenarios. She then showed that she was able to use TMS on subjects and change their moral judgements about the hypothetical scenarios.
  6. There was a lot of hype about Infer.NET. It will make exploring different models much easier, and it seems much more powerful than VIBES or BUGS.
  7. I attended the causality workshop, which explored methods for inferring causation from observational data, or from experiments where you don't have direct control over the variables you'd like to manipulate. Judea Pearl gave a very enthusiastic presentation; I am not sure I would consider it a good presentation, however. There was some tension in the air between Philip Dawid and Judea Pearl over their views on causation, which have created two camps in the field. I don't think they are as far apart as they think; the divide is not as big as the one between Bayesians and frequentists, for example. Pearl presented his do-calculus for inferring causation in causal graphs, whose rules are derived from a set of axioms (a toy example of the classic back-door adjustment appears after this list). Dawid gave a presentation highlighting what I hope most people already know: conditional independence in graphical models is not necessarily the same thing as causation, and nothing is as good as a randomized experiment. However, Kevin Murphy, in Dawid's camp, showed that one can prove all of the do-calculus rules using the IDAG: if one sets up a graphical model with explicit intervention variables, one can derive the do-calculus rules from the standard conditional independence properties of graphical models. Wrapping one's mind around the correct approach for causation is much more difficult and subtle than for prediction. I believe this is related to the fact that it is much more difficult to get ground truth when testing causal inference methods; Guyon highlighted this fact in relation to her causality challenge.
  8. Shakir Mohamed presented a paper extending PCA to other data types, with distributions in the exponential family. Normal PCA works under an assumption of Gaussianity in the data; EPCA can assume a Bernoulli distribution, for example (a minimal Bernoulli version is sketched after this list).
  9. Jurgen Van Gael presented a paper where he extended the iHMM to a factorial iHMM; he basically made the same move that takes an HMM to a factorial HMM, but with iHMMs. An iHMM is an HMM with an infinite number of latent states, with the transition matrix drawn from a hierarchical Dirichlet process.
  10. Sebastian Seung gave a presentation on decoding the connectome. The connectome is basically the connection matrix between all the neurons in the brain; it amounts to summarizing a brain as a graph, with each neuron as a node and each synapse as an edge. The difficulty of the task is converting images of 20-30 nm thick brain slices into a connection matrix. So far this has only been done for C. elegans, which has a mere 300 neurons; with that, scientists have reverse engineered a lot of C. elegans behaviour. They are currently working on decoding a cubic mm of mouse brain, using computer vision algorithms to automate and speed up the process. He alluded to the massive amounts of data involved. By my calculations, merely storing the connectome of the human brain would require 432 TB (a rough version of that arithmetic appears after this list); the imagery would be vastly more. If one had the connectome matrix, it would open up tons of possibilities for analysis. I would like to run spectral clustering on the graph and see how closely the graph clusters correspond to anatomical structures. Of course, I don't know how one would run spectral clustering (i.e., do an eigendecomposition) on a matrix that large. Sebastian showed a video with 3D graphics illustrating the imaging process, which seemed like it was made for the Discovery Channel. The Star Wars music in the background was a bit much ;)
  11. There was a paper on bootstrapping the ROC curve. Basically, they are trying to get confidence bounds on an ROC curve. It is important to get a sense of confidence in your performance, to be sure that it was not due to random chance (a simple bootstrap sketch appears after this list). It is interesting to me because I have looked into model-based approaches to estimating the ROC curve.
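
To make the marginal-likelihood point in item 1 concrete, here is a minimal sketch. It uses scikit-learn's GaussianMixture and scores each number of clusters by BIC, which is only a crude large-sample approximation to the log marginal likelihood; the toy data and all settings are my own.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: three well-separated Gaussian blobs in 2D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (-2.0, 0.0, 3.0)])

# Score mixtures with different numbers of components. Lower BIC
# corresponds to a higher approximate marginal likelihood, so this
# ranks the clusterings; k = 3 should win here.
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gmm.bic(X), 1))
```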
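
The joint-sparsity penalty in item 2 is easiest to see in the parametric case. A minimal sketch, assuming the coefficients sit in a matrix W with one row per factor and one column per regression: the group penalty sums the Euclidean norms of the rows, and its proximal (soft-thresholding) step zeroes out whole rows, which is exactly "this factor has zero influence in all the regressions".

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal step for the sum-of-row-norms (group lasso) penalty:
    shrink every row of W toward zero, and set rows whose norm is
    below lam exactly to zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

# Rows are factors, columns are the different regressions.
W = np.array([[0.10, -0.05],
              [2.00,  1.00],
              [-0.02, 0.01]])
print(group_soft_threshold(W, 0.5))  # rows 1 and 3 are zeroed in both columns
```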
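
Here is my reading of the generative model in item 3 as a sketch (the dimensions, kernel, and noise level are my own toy choices): each latent factor is an independent GP over time, and the observations are a linear map of the factors plus independent noise.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k, d = 200, 2, 10  # time points, latent factors, observed dimensions
t = np.linspace(0.0, 10.0, T)

def rbf(a, b, ell=1.0):
    """Squared-exponential covariance between two sets of time points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Draw k independent GP latent trajectories over time.
K = rbf(t, t) + 1e-6 * np.eye(T)  # jitter for numerical stability
X = np.linalg.cholesky(K) @ rng.standard_normal((T, k))

# Factor-analysis-style observation model: linear loadings plus noise.
C = rng.standard_normal((d, k))
Y = X @ C.T + 0.1 * rng.standard_normal((T, d))  # the observed time series
```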
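
For item 4, here is a toy version of the gradient-matching idea as I understood it; this is my reconstruction on a trivial linear ODE, not the authors' model. Fit a GP to the noisy states, read a derivative estimate off the GP (the derivative of a GP posterior mean is available in closed form), and choose the ODE parameter that minimizes the mismatch between that estimate and the ODE right-hand side.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def rbf_dt(a, b, ell=1.0):
    """Derivative of the RBF kernel with respect to its first argument."""
    return -(a[:, None] - b[None, :]) / ell ** 2 * rbf(a, b, ell)

# Simulate dx/dt = theta * x with theta = -0.5, observed with noise.
rng = np.random.default_rng(0)
theta_true = -0.5
t = np.linspace(0.0, 5.0, 40)
y = np.exp(theta_true * t) + 0.01 * rng.standard_normal(t.size)

# GP regression: posterior mean of the state and of its time derivative.
alpha = np.linalg.solve(rbf(t, t) + 1e-4 * np.eye(t.size), y)
m = rbf(t, t) @ alpha      # smoothed state estimate
dm = rbf_dt(t, t) @ alpha  # GP estimate of dx/dt

# Match dm ≈ theta * m in least squares (closed form for this linear ODE).
theta_hat = (m @ dm) / (m @ m)
print(theta_hat)  # close to -0.5
```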
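
For item 7, the simplest concrete payoff of the do-calculus is the back-door adjustment: in the graph Z -> X, Z -> Y, X -> Y, the interventional distribution is p(y | do(x)) = sum_z p(y | x, z) p(z), which is generally not the observational conditional p(y | x). A toy numerical check on an arbitrary joint, purely to show the two computations differ:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)  # a random joint p[z, x, y]

p_z = p.sum(axis=(1, 2))                         # p(z)
p_y_given_zx = p / p.sum(axis=2, keepdims=True)  # p(y | z, x)

# Back-door adjustment: p(y | do(x=1)) = sum_z p(y | x=1, z) p(z)
p_do = (p_y_given_zx[:, 1, :] * p_z[:, None]).sum(axis=0)

# Observational conditional: p(y | x=1)
p_obs = p[:, 1, :].sum(axis=0) / p[:, 1, :].sum()

print(p_do, p_obs)  # the two distributions over y do not match
```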
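
For item 8, a minimal Bernoulli flavor of EPCA (my own illustration via gradient ascent on the Bernoulli log-likelihood of a low-rank natural-parameter matrix, not the authors' algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d, k = 100, 20, 2

# Binary data generated from a true low-rank logistic model.
X = (rng.random((n, d)) < sigmoid(rng.standard_normal((n, k)) @
                                  rng.standard_normal((k, d)))).astype(float)

# Fit: factor the natural parameters as U @ V and ascend the likelihood.
U = 0.01 * rng.standard_normal((n, k))
V = 0.01 * rng.standard_normal((k, d))
for _ in range(500):
    P = sigmoid(U @ V)  # predicted Bernoulli means
    G = X - P           # gradient of the log-likelihood wrt U @ V
    U += 0.1 * G @ V.T / d
    V += 0.1 * U.T @ G / n

nll = -(X * np.log(P + 1e-9) + (1 - X) * np.log(1 - P + 1e-9)).mean()
print(nll)  # below log(2) ≈ 0.693, the coin-flip baseline
```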
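
For item 10, here is roughly how one gets a number in the ballpark of that 432 TB figure (the neuron and synapse counts are the commonly cited orders of magnitude, and the bytes-per-synapse is my assumption):

```python
# Store the connectome as an adjacency list: one neuron ID per synapse.
neurons = 1e11             # ~100 billion neurons in a human brain
synapses_per_neuron = 1e3  # order-of-magnitude average
bytes_per_synapse = 5      # ~37 bits to index 1e11 neurons, rounded up

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 1e12, "TB")  # ~500 TB, the same order as 432 TB
```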
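
Finally, for item 11, the basic bootstrap recipe is easy to sketch. The data here are a toy, and I bootstrap the AUC summary rather than the full curve for brevity:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy classifier output: scores that are informative but noisy.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
scores = labels + rng.normal(0.0, 1.0, size=500)

# Resample (label, score) pairs with replacement; the spread of the
# resampled statistic gives a confidence interval.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, labels.size, size=labels.size)
    if labels[idx].min() == labels[idx].max():
        continue  # a resample needs both classes for an ROC curve
    aucs.append(roc_auc_score(labels[idx], scores[idx]))

print(np.percentile(aucs, [2.5, 97.5]))  # 95% bootstrap interval for the AUC
```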
Obviously, this is only a small subset of NIPS. However, it will give me a lot of material when it is my turn to present at my group's weekly meetings. The list of proceedings is here.
