I went for a run of about 10 km this morning in heavy wind and rain, on my second day in San Francisco. Seeing many local runners out in such bad weather, I came to believe that running even in bad conditions is the spirit of the Bay Area. I truly became a Bay Area runner in spirit.
By the way, I am attending the AAAI (Association for the Advancement of Artificial Intelligence) conference, and this article covers the sessions on the second day. The report for the first day is here.
In this series of articles, I am reporting on the sessions at AAAI.
This talk was given by Steve Young of the University of Cambridge. It covered everything from a basic introduction to dialog systems to advanced architectures and methods for training the models with deep learning (CNNs, LSTMs).
Figure 1: Extraction of lexical features with a CNN.
The presentation seemed to be aimed at beginners, but unfortunately I understood little beyond the fact that building lexical features from text is useful. Please don't ask me anything more about this session…
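As far as I could follow Figure 1, the idea resembles the standard recipe for text features: slide convolution filters over word embeddings, apply a nonlinearity, and max-pool over positions. Here is a minimal numpy sketch of that general recipe, with made-up dimensions and random weights; it is my own illustration, not the speaker's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sentence: 6 words, each a 4-dimensional embedding (assumed sizes)
sentence = rng.normal(size=(6, 4))
n_filters, width = 3, 2  # each filter spans `width` consecutive words

filters = rng.normal(size=(n_filters, width * 4))
# Collect every window of `width` consecutive word vectors, flattened
windows = np.stack([sentence[i:i + width].ravel() for i in range(6 - width + 1)])
# Convolve (dot product per window), apply ReLU, then max-pool over positions
activations = np.maximum(windows @ filters.T, 0.0)   # shape: (positions, filters)
lexical_feature = activations.max(axis=0)            # one value per filter
```

The max-over-positions step is what makes the feature length-independent: each filter reports only its strongest match anywhere in the sentence.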
The title suggests a 101-level talk on generating features, but it was actually much deeper than I expected.
He introduced Random Features, which map high-dimensional data to a low-dimensional feature space under the assumption of a shift-invariant kernel.
He also mentioned a new algorithm that adds moment matching to the original one, and a noise-reduction method that achieves better performance by using PCA in frequency space.
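I understood "the original one" to be the classic random Fourier features construction: sample frequencies from the kernel's spectral density so that inner products of the low-dimensional features approximate the kernel. A minimal sketch of that baseline idea, assuming a Gaussian (RBF) kernel; the moment-matching and frequency-space PCA refinements from the talk are not reproduced here.

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=None):
    """Map X (n_samples, d) to n_features dimensions so that
    z(x) @ z(y) approximates the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # For an RBF kernel the spectral density is Gaussian with std sqrt(2*gamma)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Check the approximation on two random points
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, seed=1)
approx = Z[0] @ Z[1]
exact = np.exp(-0.5 * np.sum((X[0] - X[1]) ** 2))
```

The approximation error shrinks like one over the square root of the number of random features, which is why the method scales well compared with computing the full kernel matrix.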
"Feature Construction" here means constructing features in frequency space or in a low-dimensional representation, rather than "creating features" in the classical predictive sense.
Since the movement of a human or robot arm can be assumed to lie on a hypersphere or hypertorus, he proposed a method that estimates the movement by generalizing the von Mises distribution to multivariate estimation.
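The talk concerned a multivariate generalization, but even the univariate case shows why a circular distribution is needed: angles wrap around, so an ordinary sample mean of values near ±π is misleading, while the von Mises maximum-likelihood mean direction handles the wrap-around. A small sketch using numpy's built-in von Mises sampler (my own toy data, not the paper's model):

```python
import numpy as np

def circular_mean(theta):
    """MLE of the von Mises mean direction: the angle of the resultant
    of the unit vectors (cos theta, sin theta)."""
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

rng = np.random.default_rng(0)
# Simulated joint angles concentrated around pi/4 on the circle
angles = rng.vonmises(mu=np.pi / 4, kappa=8.0, size=2000)
mu_hat = circular_mean(angles)
```

A product of independent von Mises distributions over several joint angles lives on a torus, which is the setting the hypertorus assumption refers to.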
Another talk was about a Bayesian network that handles values that are missing not at random.
I'd like to reread "Statistical Science of Survey Observation" by Dr. Hoshino.
I decided to attend this session because my colleague was covering the other one. The session was about the following topics.
One attendee argued: "It is not clear that the proposed method converges to the global optimum. There might be a problem either in the mathematical optimization or in the problem setting."
I will follow the research on centroid-based clustering.
Since I wanted to make today my clustering day, I went to the clustering sessions and saw the following topics.
These are very interesting, and I hope to use them in my work projects.
In particular, the ensemble clustering method seems intriguing, so I will look into it further.
The talk was presented by Peter Dayan of University College London. To sum up, reinforcement learning in AI is similar to the learning mechanisms of neurons in the brain. He introduced this with several examples, and argued that the problem setting, the estimation method, and the environment are all essential to experiments.
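A minimal sketch of the connection he drew: in temporal-difference learning, the prediction error delta = r + gamma * V(s') - V(s) is the quantity commonly compared with dopamine neuron responses in the neuroscience literature. This toy example is my own, not from the talk; it runs TD(0) on a five-state chain with a reward only at the end.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)  # learned value of each state

for _ in range(2000):          # episodes
    s = 0
    while s < n_states - 1:    # walk deterministically to the terminal state
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        v_next = 0.0 if s_next == n_states - 1 else V[s_next]  # terminal value is 0
        delta = r + gamma * v_next - V[s]   # reward-prediction error
        V[s] += alpha * delta               # TD(0) update
        s = s_next
```

After learning, the prediction error at the rewarded step shrinks toward zero while earlier states acquire discounted value (V converges to 0.729, 0.81, 0.9, 1.0 along the chain), mirroring how dopamine responses are reported to shift from the reward itself to its predictors.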
Figure 2: A video of pigeons learning to peck a light to obtain food.
That was my report on the second day of AAAI.