Jiarong Jiang

I will be on the job market this fall. Contact: jiarong AT umiacs DOT umd DOT edu
I am currently a Ph.D. student in the Department of Computer Science at the University of Maryland, College Park, advised by Dr. Hal Daumé III. I am also affiliated with UMIACS (CLIP lab). My research interests are efficient approximate inference, graphical models, and structured prediction, particularly parsing. I received my Bachelor's degree in Mathematics (Information and Computing Science) and a second Bachelor's degree in Computer Science from Fudan University, Shanghai, China.
Publications

- Jiarong Jiang, Taesun Moon, Hal Daumé III, Jason Eisner. Prioritized Asynchronous Belief Propagation. ICML 2013 Workshop on Inferning. [abstract] | [slides]

- Jiarong Jiang, Adam Teichert, Hal Daumé III, Jason Eisner. Learned Prioritization for Trading Off Accuracy and Speed. NIPS 2012. [abstract] | [paper] | [poster]

  Abstract: Users want natural language processing (NLP) systems to be both fast and accurate, but quality often comes at the cost of speed. The field has been manually exploring various speed-accuracy tradeoffs for particular problems or datasets. We aim to explore this space automatically, focusing here on the case of agenda-based syntactic parsing (Kay, 1986). Unfortunately, off-the-shelf reinforcement learning techniques fail to learn good policies: the state space is too large to explore naively. We propose a hybrid reinforcement/apprenticeship learning algorithm that, even with few inexpensive features, can automatically learn weights that achieve competitive accuracies at significant improvements in speed over state-of-the-art baselines.
  (A toy sketch of the agenda-based parsing loop appears after this list.)

- Jiarong Jiang, Hal Daumé III. Q-learning on a multi-state MDP. Learning Workshop, 2012. (talk)

- Jiarong Jiang, Piyush Rai, Hal Daumé III. Message-Passing for Approximate MAP Inference with Latent Variables. NIPS 2011. [abstract] | [paper]

  Abstract: We consider a general inference setting for discrete probabilistic graphical models where we seek maximum a posteriori (MAP) estimates for a subset of the random variables (max nodes), marginalizing over the rest (sum nodes). We present a hybrid message-passing algorithm to accomplish this. The hybrid algorithm passes a mix of sum and max messages depending on the type of the source node (sum or max). We derive our algorithm by showing that it falls out as the solution of a particular relaxation of a variational framework. We further show that the Expectation Maximization algorithm can be seen as an approximation to our algorithm. Experimental results on synthetic and real-world datasets, against several baselines, demonstrate the efficacy of our proposed algorithm.
  (A toy sketch of the hybrid sum/max messages appears after this list.)

- Jiarong Jiang, Adam Teichert, Hal Daumé III. Faster, Better, or Both! Learning Priority Functions for Decoding. Mid-Atlantic Student Colloquium on Speech, Language and Learning, 2011. (talk) [abstract] | [slides]

- Amit Goyal, Jiarong Jiang, Hal Daumé III. Segmenting low-level instructions into high-level instructions. Learning Workshop, 2011. [abstract]

- Jiarong Jiang, Piyush Rai, Hal Daumé III. Message Passing Algorithm for Marginal-MAP Estimation. Learning Workshop, 2010. [abstract]
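The two prioritization papers above revolve around agenda-based parsing: finished constituents live in a chart, candidate constituents wait on a priority queue, and the priority function decides what to pop next, trading speed against accuracy. Below is a minimal sketch of that loop, assuming a toy CNF grammar encoding and a naive best-first priority of my own; the learned, feature-based priorities and the actual parser from the NIPS 2012 paper are not reproduced here.

```python
import heapq

def agenda_parse(words, unary, binary, priority):
    """Best-first CKY over a weighted CNF grammar (hypothetical encoding).

    unary:    dict word -> list of (label, log_prob)
    binary:   dict (left_label, right_label) -> list of (parent, log_prob)
    priority: function (label, i, j, score, n) -> float; higher pops first
    Returns a chart mapping (label, i, j) -> best log score found.
    """
    n = len(words)
    chart = {}   # finished items: best score per (label, i, j)
    agenda = []  # max-heap, simulated by negating priorities
    for i, w in enumerate(words):
        for lab, lp in unary.get(w, []):
            heapq.heappush(agenda, (-priority(lab, i, i + 1, lp, n), lp, lab, i, i + 1))
    while agenda:
        _, score, lab, i, j = heapq.heappop(agenda)
        if (lab, i, j) in chart:
            continue  # already popped with an equal-or-better score
        chart[(lab, i, j)] = score
        # Combine the newly finished item with adjacent finished items.
        for (b, k, l), s2 in list(chart.items()):
            if l == i:  # (b, k, i) sits to the left of (lab, i, j)
                for parent, lp in binary.get((b, lab), []):
                    new = s2 + score + lp
                    heapq.heappush(agenda, (-priority(parent, k, j, new, n), new, parent, k, j))
            if k == j:  # (b, j, l) sits to the right of (lab, i, j)
                for parent, lp in binary.get((lab, b), []):
                    new = score + s2 + lp
                    heapq.heappush(agenda, (-priority(parent, i, l, new, n), new, parent, i, l))
    return chart

# Naive best-first policy: pop by Viterbi inside score alone.
def best_first(label, i, j, score, n):
    return score

# Toy grammar (hypothetical): "dogs bark" parses as S -> NP VP.
unary = {"dogs": [("NP", -0.5)], "bark": [("VP", -0.7)]}
binary = {("NP", "VP"): [("S", -0.1)]}
chart = agenda_parse(["dogs", "bark"], unary, binary, best_first)
print(chart[("S", 0, 2)])  # best log score of the full-span S (about -1.3)
```

With log-probability weights, popping by inside score alone behaves like uniform-cost search; a learned policy would instead score items with a weighted combination of cheap features (span length, label, word identities), which is the space the paper explores.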
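The NIPS 2011 abstract above describes messages whose aggregation operator depends on the type of the source node: sum over latent (marginalized) nodes, max over MAP nodes. The sketch below applies that idea to a pairwise chain MRF with a single forward-backward sweep. The chain, potentials, and node-type assignment are made-up toy values, and this schematic omits the paper's variational derivation and any iteration to convergence.

```python
import numpy as np

def hybrid_chain_pass(node_pot, edge_pot, is_max):
    """One forward-backward sweep of hybrid messages on a chain MRF.

    node_pot: list of n unary potential vectors
    edge_pot: list of n-1 matrices; edge_pot[i][a, b] scores x_i=a, x_{i+1}=b
    is_max:   list of n bools; True marks a MAP (max) node, False a sum node
    Returns unnormalized per-node beliefs.
    """
    n = len(node_pot)
    # The aggregation operator is chosen by the *source* node's type.
    agg = [np.max if m else np.sum for m in is_max]
    fwd = [np.ones_like(p) for p in node_pot]  # message arriving from the left
    bwd = [np.ones_like(p) for p in node_pot]  # message arriving from the right
    for i in range(n - 1):          # left-to-right sweep
        pre = node_pot[i] * fwd[i]
        fwd[i + 1] = agg[i](pre[:, None] * edge_pot[i], axis=0)
    for i in range(n - 1, 0, -1):   # right-to-left sweep
        pre = node_pot[i] * bwd[i]
        bwd[i - 1] = agg[i](edge_pot[i - 1] * pre[None, :], axis=1)
    return [node_pot[i] * fwd[i] * bwd[i] for i in range(n)]

# Toy 3-node binary chain: x0 is latent (sum node), x1 and x2 are MAP nodes.
pots = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
edges = [np.array([[0.9, 0.1], [0.2, 0.8]]),
         np.array([[0.9, 0.1], [0.2, 0.8]])]
beliefs = hybrid_chain_pass(pots, edges, is_max=[False, True, True])
print([int(np.argmax(b)) for b in beliefs[1:]])  # decoded values for the max nodes
```

Note that exact marginal-MAP on a chain would sum out all latent nodes before maximizing, and the operators do not commute; the hybrid sweep sidesteps that ordering, which matches the paper's framing of the method as approximate inference.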
Miscellaneous

- Python Crash Course (Day 2), Language Science Winter Storm 2013. [slides | source code (.tar, .zip) | solution (.tar, .zip)]
- Marginal-MAP source code [Coming soon]
- A Java wrapper for Evalb [Here]
Last update: Sep 20, 2012
Useful Links: [NLP Mtg Deadlines]