Elizabeth Buchanan, PhD
Marshfield Clinic Research Institute
(presentation available here)
Elizabeth Buchanan, PhD, is Director of the Office of Research Support Services and Senior Research Scientist at the Marshfield Clinic Research Institute. For over twenty years, Elizabeth’s scholarship has focused on research ethics, compliance, and regulations, specifically around Internet, social media, and big data research. In these areas, she has written guidelines for IRBs/REBs, contributed to the Secretary’s Advisory Committee on Human Research Protections (SACHRP) in 2013, and was a co-author of the 2012 Association of Internet Researchers Ethics Guidelines. Elizabeth serves as faculty at Fordham University’s Research Ethics Training Institute (RETI), as Associate Editor of the Journal of Empirical Research on Human Research Ethics, as a Board Member of Public Responsibility in Medicine and Research (PRIM&R), and as Board Member and Secretary of the Open Door Free Clinic, a community resource in Chippewa Falls, Wisconsin. Prior to joining the Marshfield Clinic Research Institute, she was Endowed Chair in Ethics at the University of Wisconsin-Stout.
Prof. Mikołaj Morzy
Poznan University of Technology
Abstract: “Machine learning pitfalls, or what could go wrong?”
Machine learning models deployed in production play a crucial role in many areas of contemporary life. Along with their widespread adoption comes a growing concern about the possible biases and discrimination these models can introduce. In recent years we have witnessed many examples of machine learning models going awry due to inadequate data selection strategies, a wrong choice of objective function, an unclear relationship between model metrics and broader business objectives, and so on. In this talk I will present, in broad terms, the entire pipeline of machine learning model design, development, deployment, and evaluation. I will show examples of bad engineering practices that can lead to biased models, and I will discuss methods for mitigating some of the risks of the widespread deployment of machine learning models.
Mikolaj Morzy is an Associate Professor at the Faculty of Computing at Poznan University of Technology (PUT). His research interests focus on machine learning and its applications in natural language processing, complex network systems, and social networks. He currently holds the post of Deputy Dean for Science at the Faculty of Computing at PUT, responsible for scientific outreach, supervision of the scientific projects conducted at the faculty, and management of doctoral studies.
He has helped to organize several editions of the Machine Learning Meetup in Poznan, a vibrant community of over 600 machine learning practitioners and enthusiasts. He is also active in science popularization; in recent years he has been invited to give lectures on machine learning at two TEDx events, in Poznan and Bydgoszcz. In the past he worked at the Westfälische Wilhelms-Universität Münster (Germany) and Loyola University New Orleans (USA).
Jan Piasecki, PhD
Jagiellonian University Medical College
Abstract: “Ethical Framework for Webimmunization Score on Twitter”
(presentation script available here)
Webimmunization is defined as individual or group susceptibility to misinformation on social media (e.g., Twitter). The main goal of the #webimmunization project is to implement machine learning models that would allow us to predict individual and online-community webimmunization scores based on activity on Twitter. Achieving this goal is possible only with a massive amount of data collected through the Twitter API. Therefore, we wish to establish the most relevant ethical framework for the #webimmunization project before undertaking research activities. I will discuss possible ethical frameworks that can inform us about the ethical challenges of the #webimmunization project, including research ethics and information ethics. Then I will address the specific challenges of our research project and discuss possible measures for mitigating risks to the participants and the research team.
Jan Piasecki is a bioethicist with a background in philosophy. He has been conducting research and teaching at the Medical College of Jagiellonian University since 2012. His research is mainly focused on the challenges of conducting biomedical research involving human beings. He studies regulations concerning research with humans and examines their conceptual framework and ethical justification. He has been concentrating on issues such as research with minors, epidemiological studies, the concept of learning healthcare systems, and the use of electronic health records in research.
Nicholas Proferes, PhD
Arizona State University
Abstract: “Ethics for Twitter Research Beyond Review Boards”
(presentation available here)
Most researchers would likely agree that they want their work to be ethical. After all, who is going to defend unethical work? But there are numerous challenges in moving from a fuzzy feel-good buzzword to the brass tacks of operationalizing ethics as part of one’s research practice. How should we navigate the gap between what we feel is morally correct and what institutional policies require? How do we weigh values such as respecting individual autonomy against other values we might care about, such as sharing research and knowledge? This talk examines ethical considerations in Twitter research that go beyond the scope of ethics review boards, including researcher responsibilities when using publicly available data, the gap between what researchers do and what users expect, and ethical data sharing.
Nicholas Proferes is an Assistant Professor at Arizona State University’s School of Social and Behavioral Sciences. His research interests include users’ understandings of socio-technical systems such as social media, societal discourse about technology, and issues of power and ethics in the digital landscape. As part of this research agenda, he has studied Twitter users’ understandings of information flows and how that knowledge impacts the choices users make on the platform; how tech leaders such as Mark Zuckerberg use strategic language choices to position their technologies in society; and how communities of academic researchers are conceptualizing information ethics and the impact of that on their research practices.