{"id":21,"date":"2019-01-31T21:38:46","date_gmt":"2019-01-31T21:38:46","guid":{"rendered":"http:\/\/groups.cs.umass.edu\/equate\/?p=21"},"modified":"2019-03-26T20:08:43","modified_gmt":"2019-03-26T20:08:43","slug":"engineering-fair-systems","status":"publish","type":"post","link":"https:\/\/groups.cs.umass.edu\/equate\/research\/engineering-fair-systems","title":{"rendered":"Engineering Fair Systems"},"content":{"rendered":"<p>Many diverse factors can cause software bias, including poor design, implementation bugs, unintended component interactions, and the use of unsafe algorithms or biased data. Our work focuses on using the engineering process to improve software fairness. For example, tools can help domain experts specify fairness properties and detect inconsistencies among those requirements; they can automatically generate test suites that measure bias in black-box systems, even when the system&#8217;s source code and the data used to train it are unavailable; they can help developers and data scientists debug causes of bias in both the source code and the data; and they can formally verify fairness properties of the implementation. Our work in engineering fair systems combines research in software engineering with machine learning, vision, natural language processing, and theoretical computer science to create tools that help build fairer systems.<!--more--><\/p>\n<h3>Publications<\/h3>\n<ul>\n<li>Yuriy Brun and Alexandra Meliou, <a href=\"https:\/\/people.cs.umass.edu\/~brun\/pubs\/pubs\/Brun18fse-nier.pdf\">Software Fairness<\/a>, in Proceedings of the New Ideas and Emerging Results Track at the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC\/FSE), 2018, pp. 754-759.<\/li>\n<li>Rico Angell, Brittany Johnson, Yuriy Brun, and Alexandra Meliou, <a href=\"https:\/\/people.cs.umass.edu\/~brun\/pubs\/pubs\/Angell18demo-fse.pdf\">Themis: Automatically Testing Software for Discrimination<\/a>, in Proceedings of the Demonstrations Track at the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC\/FSE), 2018, pp. 871-875.<\/li>\n<li>Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou, <a href=\"https:\/\/people.cs.umass.edu\/~brun\/pubs\/pubs\/Galhotra17fse.pdf\">Fairness Testing: Testing Software for Discrimination<\/a>, in Proceedings of the 11th Joint Meeting of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC\/FSE), 2017, pp. 
498-510.<br \/>\nWinner of an ACM SIGSOFT Distinguished Paper Award<\/li>\n<li>Rico Angell, Brittany Johnson, Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou, <a href=\"https:\/\/people.cs.umass.edu\/~brun\/pubs\/pubs\/Angell18demo-fse.pdf\" target=\"_blank\" rel=\"noopener\">Themis: Automated Test Suite Generator for Testing Software for Fairness<\/a><\/li>\n<li><a href=\"https:\/\/people.cs.umass.edu\/~brun\/video\/Fairness\/FairnessTesting.mp4\">Testing Software for Discrimination video<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Many diverse factors can cause software bias, including poor design,\u00a0implementation bugs, unintended component interactions, and the use of unsafe\u00a0algorithms or biased data. Our work focuses on using the engineering process to improve\u00a0software fairness. For example, tools can help domain experts specify\u00a0fairness properties and detect inconsistencies among those requirements; they\u00a0can automatically generate test suites to measure 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-21","post","type-post","status-publish","format-standard","hentry","category-research"],"_links":{"self":[{"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/posts\/21","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/comments?post=21"}],"version-history":[{"count":9,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/posts\/21\/revisions"}],"predecessor-version":[{"id":129,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/posts\/21\/revisions\/129"}],"wp:attachment":[{"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/media?parent=21"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/categories?post=21"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/equate\/wp-json\/wp\/v2\/tags?post=21"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}