
Main navigation

  • Home
  • Series
  • People
  • Depts & Colleges
  • Open Education

Neal Benowitz

No podcast episodes were found for this contributor.

Angela Wilson

No podcast episodes were found for this contributor.

Rhodessa Jones

No podcast episodes were found for this contributor.

Nancy Rabinowitz

No podcast episodes were found for this contributor.

May 2022 Podcast with Neal Benowitz

Series
Let's talk e-cigarettes
Jamie Hartmann-Boyce and Nicola Lindson discuss emerging evidence in e-cigarette research and interview Neal Benowitz.
In this episode Associate Professor Jamie Hartmann-Boyce and Dr Nicola Lindson discuss the emerging evidence in e-cigarette research and interview Professor Neal Benowitz. This podcast is a companion to the Cochrane living systematic review on electronic cigarettes and shares the evidence from the monthly searches.

Long description
In the May episode, Jamie Hartmann-Boyce talks with Neal Benowitz, Emeritus Professor at the University of California, San Francisco. Professor Benowitz practises medicine, cardiology and clinical pharmacology, and has a particular interest in tobacco as a major risk factor for cardiovascular disease.
Professor Neal Benowitz talks to Associate Professor Jamie Hartmann-Boyce about the toxicological data from studies of e-cigarettes. He stresses the importance of comparing e-cigarette use with combustible cigarette use, as exposure to toxicants, as measured by biomarkers, is much lower in people who vape than in people who smoke combustible cigarettes. Professor Benowitz points out that many e-cigarette users have been long-term combustible cigarette users, so it is difficult to separate out the effects of each. He highlights the need for longitudinal studies among people who have only used e-cigarettes and have never used combustible cigarettes. Professor Benowitz also discusses the need to look at the different types of e-cigarettes: there are many different products, and toxicity will vary between devices.
Jamie and Nicola discuss recent work comparing biomarkers of harm. Exclusive e-cigarette use was associated with lower levels of biomarkers of harm than exclusive use of combustible tobacco, or dual use of combustible tobacco and e-cigarettes. This work was funded by the Oxford University Public Policy Challenge Fund and Cancer Research UK.
Jamie and Nicola also bring us up to date with the literature search conducted on 1 May 2022. The May search found two new studies, three new ongoing studies and two records linked to previously identified studies. We will include these studies in future updates of the Cochrane review.

For more information on the full Cochrane review updated in September 2021 see: https://doi.org/10.1002/14651858.CD010216.pub6 or our webpage https://www.cebm.ox.ac.uk/research/electronic-cigarettes-for-smoking-cessation-cochrane-living-systematic-review-1

Episode Information

Series
Let's talk e-cigarettes
People
Neal Benowitz
Jamie Hartmann-Boyce
Nicola Lindson
Keywords
e-cigarette
Health
tobacco
cardiovascular disease
Department: Centre for Evidence-Based Medicine
Date Added: 01/06/2022
Duration: 00:14:01

The Medea Project: Theatre for Incarcerated People

Series
Reimagining Ancient Greece and Rome: APGRD Podcast
A podcast episode with Nancy Rabinowitz, Rhodessa Jones, and Angela Wilson
Rhodessa Jones is a theatre practitioner and artistic director, and founded The Medea Project: Theater for Incarcerated Women/HIV Circle in 1989. Nancy Rabinowitz is Professor Emerita of Comparative Literature at Hamilton College, and has worked extensively on the impact of Greek theatre in prisons, and co-edited Classics and Prison Education in the US (2021). Angela Wilson is a formerly incarcerated mother, writer, actress, teacher, activist, and a core member of the Medea Project. The three discuss the Medea Project's origins, latest residency, and their engagement with myth as a ritual of resilience.

Episode Information

Series
Reimagining Ancient Greece and Rome: APGRD Podcast
People
Nancy Rabinowitz
Rhodessa Jones
Angela Wilson
Keywords
Greek theatre
prisons
myth
theatre for incarcerated people
Department: Faculty of Classics
Date Added: 01/06/2022
Duration:

Peter Railton

No podcast episodes were found for this contributor.

Maria de Goeij

No podcast episodes were found for this contributor.

2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (3 of 3)

Series
Uehiro Lectures: Practical solutions for ethical challenges
In the last of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI".
Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may occur in particularly acute forms with AI: matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how should we contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But whether or when the threat of such “superintelligence” becomes realistic, we now face a situation in which partially intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially intelligent systems could become appropriately sensitive to moral considerations.

In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I then will draw upon research on human cognitive and social development—research that itself is undergoing a “learning revolution”—to suggest how this research enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

Episode Information

Series
Uehiro Lectures: Practical solutions for ethical challenges
People
Peter Railton
Keywords
philosophy
ethics
AI artificial intelligence
Department: Uehiro Oxford Institute
Date Added: 31/05/2022
Duration: 01:13:36

2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (2 of 3)

Series
Uehiro Lectures: Practical solutions for ethical challenges
In the second of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI".
Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may occur in particularly acute forms with AI: matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how should we contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But whether or when the threat of such “superintelligence” becomes realistic, we now face a situation in which partially intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially intelligent systems could become appropriately sensitive to moral considerations.

In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I then will draw upon research on human cognitive and social development—research that itself is undergoing a “learning revolution”—to suggest how this research enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

Episode Information

Series
Uehiro Lectures: Practical solutions for ethical challenges
People
Peter Railton
Keywords
philosophy
ethics
AI artificial intelligence
Department: Uehiro Oxford Institute
Date Added: 31/05/2022
Duration: 01:07:30


'Oxford Podcasts' X Account @oxfordpodcasts | Upcoming Talks in Oxford | © 2011-2026 The University of Oxford