{"id":170,"date":"2022-04-01T17:56:41","date_gmt":"2022-04-01T17:56:41","guid":{"rendered":"https:\/\/groups.cs.umass.edu\/equate-ml\/?page_id=170"},"modified":"2022-04-01T19:57:54","modified_gmt":"2022-04-01T19:57:54","slug":"machine-learning-retrospective-2021","status":"publish","type":"page","link":"https:\/\/groups.cs.umass.edu\/equate-ml\/machine-learning-retrospective-2021\/","title":{"rendered":"Machine Learning Retrospective, 2021"},"content":{"rendered":"\n<p>With 2021 drawing to a close, we would like to take a moment to recognize the wealth of machine learning research produced by the UMass Manning College of Information and Computer Sciences (CICS). This retrospective provides a brief summary of many (not all) of the machine learning papers published by students and\/or faculty in CICS. You can browse papers by their name in the index below, or can just scroll through to get a sense for all of the work that we are doing!<\/p>\n\n\n\n<hr class=\"wp-block-separator is-style-wide\"\/>\n\n\n\n<div class=\"wp-block-query is-layout-flow wp-block-query-is-layout-flow\">\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n<ul class=\"wp-block-post-template is-layout-flow wp-block-post-template-is-layout-flow\"><li class=\"wp-block-post post-215 post type-post status-publish format-standard has-post-thumbnail hentry category-ai-ml tag-32 tag-aistats tag-paper group-blog hfeed\">\n<h2 class=\"wp-block-post-title\"><a href=\"https:\/\/groups.cs.umass.edu\/equate-ml\/2022\/04\/05\/paper-realmvp-a-change-of-variables-method-for-rectangular-matrix-vector-products\/\" target=\"_self\" >Paper: RealMVP: A Change of Variables Method For Rectangular Matrix-Vector Products<\/a><\/h2>\n\n<figure class=\"alignright wp-block-post-featured-image\"><a href=\"https:\/\/groups.cs.umass.edu\/equate-ml\/2022\/04\/05\/paper-realmvp-a-change-of-variables-method-for-rectangular-matrix-vector-products\/\" target=\"_self\"  ><img fetchpriority=\"high\" decoding=\"async\" width=\"531\" height=\"163\" src=\"https:\/\/groups.cs.umass.edu\/equate-ml\/wp-content\/uploads\/sites\/46\/2022\/04\/Cunningham1.png\" class=\"attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"Paper: RealMVP: A Change of Variables Method For Rectangular Matrix-Vector Products\" style=\"object-fit:cover;\" srcset=\"https:\/\/groups.cs.umass.edu\/equate-ml\/wp-content\/uploads\/sites\/46\/2022\/04\/Cunningham1.png 531w, https:\/\/groups.cs.umass.edu\/equate-ml\/wp-content\/uploads\/sites\/46\/2022\/04\/Cunningham1-300x92.png 300w\" sizes=\"(max-width: 531px) 100vw, 531px\" \/><\/a><\/figure>\n\n<div class=\"wp-block-post-excerpt\"><p class=\"wp-block-post-excerpt__excerpt\">Rectangular matrix-vector products are used extensively throughout machine learning and are fundamental to neural networks such as multi-layer perceptrons, but are notably absent as normalizing flow layers. This paper identifies this methodological gap and plugs it with a tall and wide MVP change of variables formula. 
Paper: High Confidence Generalization for Reinforcement Learning
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-high-confidence-generalization-for-reinforcement-learning/

We present several classes of reinforcement learning algorithms that safely generalize to Markov decision processes (MDPs) not seen during training. Specifically, we study the setting in which some set of MDPs is accessible for training. For various definitions of safety, our algorithms give probabilistic guarantees that agents can safely generalize to MDPs that are sampled…

Paper: On the Difficulty of Unbiased Alpha Divergence Minimization
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-on-the-difficulty-of-unbiased-alpha-divergence-minimization/

Variational inference approximates a target distribution with a simpler one. While traditional inference minimizes the "exclusive" KL divergence, several algorithms have recently been proposed to minimize other divergences. Experimentally, however, these algorithms often seem to fail to converge. In this paper, we analyze the variance of the estimators underlying these algorithms. Our results…
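To make the variance issue concrete, here is a small Monte Carlo illustration (a toy example of our own, not the paper's analysis): an unbiased importance-weighted estimate of the quantity E_q[(p/q)^alpha] behind alpha divergences becomes extremely noisy as the approximating distribution q drifts away from the target p.

```python
import numpy as np

# Toy illustration: p = N(0, 1) is the target, q = N(mu, 1) is the approximation.
# exp(alpha * log(p/q)) evaluated at samples from q is an unbiased estimate of
# E_q[(p/q)^alpha], but its variance explodes as q moves away from p.
rng = np.random.default_rng(0)
alpha = 2.0

def log_normal(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

for q_mu in [0.0, 1.0, 3.0]:                  # q drifts away from p
    x = rng.normal(q_mu, 1.0, size=100_000)   # samples from q
    log_w = log_normal(x, 0.0, 1.0) - log_normal(x, q_mu, 1.0)
    est = np.exp(alpha * log_w)               # per-sample unbiased estimates
    print(f"q mean {q_mu}: estimate {est.mean():.3f}, per-sample std {est.std():.1f}")
```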
Paper: Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-posterior-value-functions-hindsight-baselines-for-policy-gradient-methods/

Hindsight allows reinforcement learning agents to leverage new observations to make inferences about earlier states and transitions. In this paper, we exploit the idea of hindsight and introduce posterior value functions. Posterior value functions are computed by inferring the posterior distribution over hidden components of the state in previous timesteps and can be used to…
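For readers unfamiliar with baselines, here is the ordinary value-function baseline that posterior value functions refine; this is a sketch of standard REINFORCE-with-baseline, not the paper's PVF construction, and the function and argument names are our own.

```python
import numpy as np

def policy_gradient_with_baseline(log_prob_grads, rewards, values, gamma=0.99):
    """Standard baseline-corrected policy gradient for one episode.

    log_prob_grads[t]: gradient of log pi(a_t | s_t) w.r.t. the policy parameters.
    values[t]: baseline estimate V(s_t); subtracting it reduces variance
    without biasing the gradient.
    """
    T = len(rewards)
    returns = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):           # discounted returns-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - np.asarray(values)
    return sum(a * g for a, g in zip(advantages, log_prob_grads))
```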
Paper: Towards Practical Mean Bounds for Small Samples
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-towards-practical-mean-bounds-for-small-samples/

Historically, to bound the mean for small sample sizes, practitioners have had to choose between methods with unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods like Hoeffding's inequality that use weaker assumptions but produce much looser (wider) intervals. In 1969, Anderson proposed a mean confidence interval strictly better than or equal to…
Paper: How and Why to Use Experimental Data to Evaluate Methods for Observational Causal Inference
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-how-and-why-to-use-experimental-data-to-evaluate-methods-for-observational-causal-inference/

Methods that infer causal dependence from observational data are central to many areas of science, including medicine, economics, and the social sciences. We describe and analyze observational…
Paper: DeepWalking Backwards: From Node Embeddings Back to Graphs
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-deepwalking-backwards-from-node-embeddings-back-to-graphs/

We investigate whether node embeddings, which are vector representations of graph nodes, can be inverted to approximately recover the graph used to generate them. We present algorithms that invert embeddings from the popular DeepWalk method. In experiments on real-world networks, we find that significant information about the original graph, such as specific edges, is often…

Paper: Faster Kernel Matrix Algebra via Density Estimation
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-faster-kernel-matrix-algebra-via-density-estimation/

Consider an n x n Gaussian kernel matrix corresponding to n input points in d dimensions. We show that one can compute a relative error approximation to the sum of entries in this matrix in just O(dn^{2/3}) time. This is significantly sublinear in the number of entries in the matrix, which is n^2. Our…
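To see what is being approximated, here is the naive quadratic-time computation of the kernel-sum quantity (unit bandwidth assumed); this O(d n^2) baseline is exactly what the paper's sublinear algorithm avoids, and is not the paper's method.

```python
import numpy as np

def gaussian_kernel_sum(X):
    """Sum of exp(-||x_i - x_j||^2) over all pairs (i, j), in O(d n^2) time."""
    sq_norms = (X ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(sq_dists, 0.0)).sum()   # clip tiny negatives from round-off

X = np.random.default_rng(0).normal(size=(1000, 5))   # n = 1000 points in d = 5
print(gaussian_kernel_sum(X))
```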
Paper: Structural Credit Assignment in Neural Networks using Reinforcement Learning
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-structural-credit-assignment-in-neural-networks-using-reinforcement-learning/

In this work, we revisit REINFORCE and investigate if we can leverage other reinforcement learning approaches to improve learning. We formalize training a neural network as a finite-horizon reinforcement learning problem and discuss how this…

Paper: Universal Off-Policy Evaluation
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-universal-off-policy-evaluation/

When faced with sequential decision-making problems, it is often useful to be able to predict what would happen if decisions were made using a new policy. Those predictions must often be based on data collected under some previously used decision-making rule. Many previous methods enable such off-policy (or counterfactual) estimation of the expected value of…
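The classical baseline the excerpt alludes to is per-trajectory importance sampling for the expected return; the sketch below shows that standard estimator (the paper's "universal" estimators go beyond the mean), with the policy functions as assumed names.

```python
import numpy as np

def importance_sampling_return(trajectories, pi_eval, pi_behavior):
    """Per-trajectory importance sampling estimate of the evaluation policy's
    expected (undiscounted) return.

    trajectories: list of trajectories, each a list of (state, action, reward).
    pi_eval(a, s), pi_behavior(a, s): action probabilities under the evaluation
    and behavior policies (hypothetical callables supplied by the user).
    """
    estimates = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for s, a, r in traj:
            ratio *= pi_eval(a, s) / pi_behavior(a, s)   # likelihood ratio
            ret += r
        estimates.append(ratio * ret)                    # reweighted return
    return float(np.mean(estimates))
```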
Paper: MCMC Variational Inference via Uncorrected Hamiltonian Annealing
https://groups.cs.umass.edu/equate-ml/2022/04/05/paper-mcmc-variational-inference-via-uncorrected-hamiltonian-annealing/

Annealed Importance Sampling (AIS) with Hamiltonian MCMC can be used to get tight lower bounds on a distribution's (log) normalization constant. Its main drawback is that it uses non-differentiable transition kernels, which makes tuning its many parameters hard.…
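For orientation, here is a minimal sketch of standard AIS with simple random-walk Metropolis transitions over a geometric annealing path; the one-dimensional target and schedule are toy choices of our own, and this is the ordinary non-differentiable version whose tuning difficulty motivates the paper, not its Hamiltonian variant.

```python
import numpy as np

# Standard AIS: anneal from a tractable start q to an unnormalized target
# p_tilde along a geometric path. The averaged log-weight is a stochastic lower
# bound on log Z (by Jensen); the accept/reject corrections below are what make
# ordinary AIS non-differentiable.
rng = np.random.default_rng(0)

def log_q(x):                  # tractable, normalized start: standard normal
    return -0.5 * x ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_p_tilde(x):            # unnormalized target (toy example): N(2, 0.5^2) up to Z
    return -0.5 * ((x - 2.0) / 0.5) ** 2

betas = np.linspace(0.0, 1.0, 51)
x = rng.normal(size=2000)                        # one chain per sample, drawn from q
log_w = np.zeros_like(x)
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += (b - b_prev) * (log_p_tilde(x) - log_q(x))
    log_pi = lambda z: (1.0 - b) * log_q(z) + b * log_p_tilde(z)
    prop = x + 0.5 * rng.normal(size=x.shape)    # random-walk Metropolis step
    accept = np.log(rng.uniform(size=x.shape)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

print("stochastic lower bound on log Z:", log_w.mean())   # true log Z is about 0.23 here
```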