Opening AI:
The Next Open Science Frontier
Tuesday, Sept 21 | 14:45 - 16:00 CEST

A round-table discussion

Moderator: Yannis Ioannidis

Professor of Informatics, University of Athens | Affiliate faculty, Athena Research & Innovation Center | Head, OpenAIRE

Yannis Ioannidis served as the President and General Director of the “Athena” Research and Innovation Center for 10 years (2011-2021). His research interests include Database and Information Systems, Data Science, Recommender Systems and Personalization, Data Infrastructures, and Human-Computer Interaction, topics on which he has published over 160 articles in leading journals and conferences; he also holds three patents. His work is often inspired by and applied to data management and analysis problems that arise in industrial environments or in the context of other scientific fields (Life Sciences, Physical Sciences, Social Sciences, Humanities) and the Arts. He is the chair of the Executive Board of OpenAIRE, which implements the European policies on open access to research publications and data, and is the Software Director of the Human Brain Project flagship initiative. He has also led, or is currently leading, the creation of new international or spin-off companies. He is an ACM and IEEE Fellow, a member of Academia Europaea, and a recipient of the ACM SIGMOD Contributions Award and several other research and teaching awards. He is also a vice chair of the European Strategy Forum on Research Infrastructures (ESFRI), the Greek delegate to ESFRI and a member of its Executive Board, and a member of the strategic management board of the Greek hub of the UN Sustainable Development Solutions Network.

The objective of this panel is to explore the boundaries of the openness of Artificial Intelligence (AI) applications, and how they fit into the open science initiative.

Legislation has attempted to tackle the problem by introducing provisions that allow access to algorithms and their workings, as in the case of the General Data Protection Regulation. Artificial Intelligence presents substantial challenges of explainability and predictability, and hence of transparency and accountability.

The aim of the discussion is twofold: (i) to explain key challenges in the use of AI in Education, Research, Health and Government, what a framework for Ethical AI would include, and, most importantly, how it fits into the open science/FAIR world; and (ii) to present the key concepts of openness in software, data and services, and then explore technical, legal and ethical constructs that could enable AI openness.

Panelists

Joan Donovan

Research Director of the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School
Dr. Donovan's research and teaching interests are focused on media manipulation, effects of disinformation campaigns, and adversarial media movements. She teaches a graduate-level course on Media Manipulation and Disinformation Campaigns (DPI-622) with a focus on how social movements, political parties, governments, corporations, and other networked groups engage in active efforts to shape media narratives and disrupt social institutions.
Dr. Donovan's research can be found in academic peer-reviewed journals such as Social Media + Society, Journal of Contemporary Ethnography (JCE), Information, Communication & Society, Social Studies of Science, and Online Information Review. Her contributions can also be found in the books, Data Science Landscape: Towards Research Standards and Protocols and Unlike Us Reader: Social Media Monopolies and Their Alternatives. Dr. Donovan's research and expertise has been showcased in a wide array of media outlets including NPR, Washington Post, The New York Times, Rolling Stone, The Atlantic, and more.
Prior to joining Harvard Kennedy School, Dr. Donovan was the Research Lead for Data & Society’s Media Manipulation Initiative, where she led a large team of researchers studying efforts to manipulate sociotechnical systems for political gain. She continues to hold an affiliate appointment with Data & Society. Dr. Donovan received her Ph.D. in Sociology and Science Studies from the University of California San Diego, and was a postdoctoral fellow at the UCLA Institute for Society and Genetics, where she studied white supremacists’ use of DNA ancestry tests, social movements, and technology.

Michalis Kritikos

Policy Analyst at the European Parliament (legal/ethics advisor on Science & Technology)
Dr Michalis Kritikos is a Policy Analyst at the European Parliament working as a legal/ethics advisor on Science and Technology issues (STOA/EPRS). Michalis is a legal and ethics expert in the fields of AI, blockchain and gene editing, the responsible governance of science and innovation, and the regulatory control of new and emerging risks.
He has worked as a Research Programme Manager for the Ethics Review Service of the European Commission, as a Senior Associate in the EU Regulatory and Environment Affairs Department of White & Case (Brussels office), as a Lecturer at several UK universities, and as a Lecturer/Project Leader at the European Institute of Public Administration (EIPA). He also taught EU Law and Institutions for several years at the London School of Economics and Political Science (LSE).
Dr Kritikos holds a Bachelor's degree in Law (Athens Law School), Master's degrees in European and International Environmental Law and in Environmental Management (University of Athens and EAEME, respectively), and a PhD in EU Technology Law and Risk Regulation (London School of Economics - LSE).

Natasa Milic-Frayling

CEO Intact Digital | Professor Emerita, University of Nottingham

Dr Natasa Milic-Frayling is the Founder and CEO of Intact Digital Ltd, a company that provides a platform and services for hosting legacy software installations to enable long-term digital data readability and use. Intact Digital works with highly regulated sectors such as Pharma and Life Sciences to support compliance with data integrity regulations, the reconstruction of research studies, and the reproducibility of data analyses.

Natasa has 25 years of experience in computer science research and innovation, including 17 years at Microsoft Research. She has authored over 100 research publications and holds a dozen approved patents. She is Professor Emerita at the University of Nottingham, where she spent 5 years serving as Chair of Data Science and contributing to the University research strategy on Data Science and AI.

Natasa is actively engaged with the broader professional community on critical issues that arise from the interdisciplinary use of digital technologies, ranging from professional ethics, privacy and design transparency to digital obsolescence and responsible innovation. She served as a member of the ACM Europe Council and as Chair of ACM Women Europe. She is an active member of the Preservation Sub-Committee within the UNESCO Memory of the World Programme and serves as Chair of the Research and Technology Working Group for the UNESCO PERSIST project.

Alex Wade

Director, Strategic Partnerships at Allen Institute for AI (AI2), USA

Alex Wade has recently joined the Allen Institute for AI (AI2) as Director of Strategic Partnerships. Previously, he worked with the Chan Zuckerberg Initiative as technical program manager for Meta, and served as Director for Scholarly Communication at Microsoft Research, where he focused on Microsoft Academic, a semantic knowledge graph of academic research publications, people, and institutions. During his career at Microsoft, Wade managed the corporate search and taxonomy management services and served as Senior Program Manager for Windows Search.

Prior to joining Microsoft, he held Systems Librarian, Engineering Librarian, Philosophy Librarian, and technical library positions at the University of Washington, the University of Michigan, and the University of California, Berkeley.

Alex holds a bachelor's degree in Philosophy from the University of California, Berkeley, and a Master of Librarianship degree from the University of Washington.
