How We Can Design Autonomous Systems for Values with Steven Umbrello


By David Yakobovitch, discovered by Player FM and our community — copyright is owned by the publisher, not Player FM, and audio is streamed directly from their servers. Hit the Subscribe button to track updates in Player FM, or paste the feed URL into other podcast apps.


Show notes:

  • Steven Umbrello is the Managing Director of the Institute for Ethics and Emerging Technologies, with a research focus on responsible innovation and ethical design methods for emerging technologies. His work centers on ethics and design thinking in building AI systems, and on how policy can shape the future of these autonomous systems.
  • Ethics clarifies how normally abstract, philosophical concepts like human values can be translated into concrete design requirements for engineers. Design has to be approached so that engineering can incorporate these often abstract human values into technological design.
  • The difficulty with AI, and with many technologies in a globalized world, is that a technology developed in country X has cross-cultural, cross-domain, cross-border impacts. The challenge is incorporating different understandings of values from across the globe into a single technology. These are some of the difficulties designers are facing right now.
  • Technology is not purely deterministic, nor is society purely constructivist, nor is technology purely instrumental, a mere neutral tool embodying no values whatsoever. That really matters, because it means the decisions that engineers, designers, and philosophers make today have a real, substantive impact on the future.
  • We can begin to break down the debate on whether or not to ban autonomous weapon systems. Technological innovations have always played a key role in military operations, and autonomous weapon systems have, at least within the last few years, received asymmetric attention in both public and academic discussions.
  • Scientists should not apologize for, but rather show, the nuance in the debate: level five autonomy in and of itself is not the problematic point of interest, but rather what type of system has this level five autonomy. These assessments are where the nuance in the debate lies.
  • For those interested in the philosophical foundations of meaningful human control, or in value sensitive design more generally, you can find my work on my website and my social media. If people are interested in following the debate on the prohibition of AWS, they can watch many of the online multilateral meetings, both hosted by the UN and outside its auspices, as they take place. People can check out Human Rights Watch and the Campaign to Stop Killer Robots for news on these events.


About HumAIn Podcast

The HumAIn Podcast is a leading artificial intelligence podcast that explores AI, data science, the future of work, and developer education for technologists. Whether you are an executive, data scientist, software engineer, product manager, or student-in-training, HumAIn connects you with industry thought leaders on the technology trends that are relevant and practical. HumAIn is a leading data science podcast whose frequently discussed topics include AI trends, AI for all, computer vision, natural language processing, machine learning, data science, and reskilling and upskilling for developers. Episodes focus on new technology, startups, and human-centered AI in the Fourth Industrial Revolution. HumAIn is the channel to release new AI products, discuss technology trends, and augment human performance.


103 episodes