PGMs don't get the same sexy treatment that ML and AI seem to get in pop science articles, so it may be worth stating explicitly that they're very much used in ML and are intellectually fascinating in their own right. A graphical model can fully describe a model's joint distribution and its dependency structure. Why this is important: a graphical model makes it very easy to give a computer your model, and there has been great success over the past two decades in doing just that [1].
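To make the "fully describe the distribution" point concrete, here's a minimal sketch in plain Python (toy numbers, all hypothetical) of the classic rain/sprinkler/grass example: the directed graph licenses the factorization P(r, s, w) = P(r) P(s | r) P(w | r, s), and once you have that, any marginal falls out by summation.

```python
from itertools import product

# Hypothetical CPDs for a three-node Bayes net: rain -> sprinkler -> wet, rain -> wet
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # P(sprinkler | rain=True)
               False: {True: 0.4, False: 0.6}}    # P(sprinkler | rain=False)
p_wet = {(True, True): 0.99, (True, False): 0.8,  # P(wet=True | rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability via the graph's factorization."""
    p_w = p_wet[(rain, sprinkler)]
    return (p_rain[rain]
            * p_sprinkler[rain][sprinkler]
            * (p_w if wet else 1 - p_w))

# Brute-force marginal: P(wet=True), summing out rain and sprinkler
p_wet_true = sum(joint(r, s, True) for r, s in product([True, False], repeat=2))
```

Brute-force summation is exponential in the number of variables, of course; the point of the course is exactly the algorithms (variable elimination, belief propagation, sampling) that exploit the graph structure to do better.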
Further, Daphne Koller is a serious force in the field, and seems to be a pretty good supervisor, so I'm guessing/hoping she is an interesting/engaging lecturer as well. Though, Stanford CS/Stats students are more able to comment on this last point.
For me, as a non-native English speaker, it's more confusing.
In Portuguese we say grafos (graphs) for the mathematical objects and gráficos when we refer to charts and images. So when I read something in English, I wonder whether "graphs" refers to the math or to pictures.
> Further, Daphne Koller is a serious force in the field, and seems to be a pretty good supervisor, so I'm guessing/hoping she is an interesting/engaging lecturer as well. Though, Stanford CS/Stats students are more able to comment on this last point.
What's the source? She's a brilliant researcher, but I've heard quite the opposite about her attitude towards human relationships...
I did research with Daphne (and co-authored a paper with her) in my senior year of undergrad, and she demanded a high standard of work, yes, but she was an excellent supervisor. Everybody knows how brilliant she is, but she also put a lot of effort into teaching my (also undergrad) project partner and me how to formulate a research problem, how to do research, and how to present research. The primary concern appeared to be our personal growth, not the research machine (though that's not to say that the research wasn't important).
Working with her was one of the highlights of my undergrad education, and her class was great, too.
Ah, so I used a 2-degree heuristic to come to that conclusion--I haven't had any first-hand experience with her, nor do I have contact with her former students. A few stats professors independently recommended her to me as a supervisor, her students seem to do well, and her research page is more welcoming than most (compare, say, Ullman's page: http://infolab.stanford.edu/~ullman/ , or Brian Ripley's posts on the R mailing list). The one thing I'd add: my experience has been that academics generally have less empathy than others; I'd be interested to hear from old students how she compares to other faculty.
I know a lot of Daphne's former and current students and she seems to be a great advisor who genuinely cares about producing both excellent research and top quality research talent.
Further, in the department, I think she is one of the people who cares most about teaching. She runs the undergraduate summer research program. She re-does her class on PGMs substantially almost every time she teaches it to try to make it better. (Though such a high rate of change may or may not be a good idea.) Daphne is almost certainly one of the key people behind the *-class effort at Stanford CS.
For those with a negative impression of Daphne, my guess is just that they are misinterpreting her directness. If she thinks you're wrong, or you're doing the wrong research, you'll know about it.
(Also, Ullman is one of the nicest people in the department in person, which is crazy given that he wrote the standard texts in compilers, databases, and arguably automata. He's emeritus these days though.)
If you are interested in learning this topic, do not go directly to the Koller/Friedman book - it certainly contains a lot of material, but its presentation is not well integrated (just look at the topic dependency graph at the beginning).
A much more cohesive introduction would be Michael Jordan's book draft that has been floating around for nearly a decade now. You can find some of the older versions online, e.g. http://www.cs.cmu.edu/~lebanon/pub/book
Go directly to chapter 5 to see how the language of PGMs can help to clarify a lot of standard material in stats and ML.
And for a nice overview of how factor graphs, when considered generally, really can capture arbitrary dependencies I would check out "Extending factor graphs so as to unify directed and undirected graphical models" http://uai.sis.pitt.edu/papers/03/p257-frey.pdf.
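The unifying idea in that paper, roughly, is that both directed CPDs and undirected potentials are just "factors": the joint is an arbitrary product of non-negative functions over variable subsets, normalized by a partition function Z. A minimal sketch (hypothetical toy model, two binary variables) of that view:

```python
from itertools import product

# One pairwise factor and one unary factor over binary variables a, b.
def phi_ab(a, b):
    """Pairwise potential favoring agreement between a and b."""
    return 2.0 if a == b else 1.0

def phi_a(a):
    """Unary potential biasing a toward True."""
    return 3.0 if a else 1.0

def unnormalized(a, b):
    # In a factor graph the joint is just a product of factors.
    return phi_ab(a, b) * phi_a(a)

# Partition function: sum of the unnormalized product over all assignments.
Z = sum(unnormalized(a, b) for a, b in product([True, False], repeat=2))

def p(a, b):
    return unnormalized(a, b) / Z
```

A directed model is the special case where each factor is a CPD and Z happens to equal 1, which is why factor graphs can subsume both families.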
Was this (Jordan's) book ever published? If so, what is it called? He seems to have a couple of other books on graphical models, but the contents of those books don't line up with the ones at the link above.
This book does not exist as a published volume, at least not in the way it is presented in this draft. This is why it tends to circulate as a draft, rather than as (say) an Amazon link.
Thank you. I spent some time looking for it. If anyone who has an up-to-date draft wants to send me a copy, my email is in my profile. Thanks in advance :).
Based upon reading the course description, if you can grok Daphne + Nir's book, then you won't learn anything from the class.
"This class does require some abstract thinking and mathematical skills. However, it is designed to require fairly little background, and a motivated student can pick up the background material as the concepts are introduced. We hope that, using our new learning platform, it should be possible for everyone to understand all of the core material."
and "For additional depth, you can refer to the best-selling textbook, _Probabilistic Graphical Models: Principles and Techniques_ by Daphne and Nir Friedman."
I can grok the book, but I can't read it like it's a vampire novel and remember anything useful... hence the reason to sign up and have something of a schedule to follow.
I signed up for the course. I have a good general background in AI but my knowledge of probabilistic graphical models is thin - hoping to quickly fill in the gaps.
I thought about buying the book but thought that listening to the lectures and doing the assignments would be more fun.
You mean as opposed to pgm-class.online.stanford.edu? Well, domains are cheap, it makes the URLs shorter, and it's less confusing. Just good design. Then again, the ai-class is run by one of the teachers' startups, so it may be to separate them from Stanford, both for that reason and to make it clear that you don't get Stanford credit.
From an SEO standpoint, it would make more sense to have them all hosted at stanford.edu/someclass, since the classes would then benefit from the "trust factor" of the root domain. You'll notice that most of Google's web properties are hosted at google.com/something and not something.google.com or a separate domain, presumably for the same reason.
Consider: maps., docs., mail., plus., groups., translate., books., scholar. There are probably a bunch more -- I don't think Google is scared of subdomains.
[1] http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml