Episodes

  • Conference: Trust and the Ethics of AI
    2022/07/08
    This workshop aims to address some of the insights that we have gained about the ethics of AI and the concept of trust. We critically explore practical and theoretical issues relating to values and frameworks, engaging with carebots, evaluations of decision support systems, and norms in the private sector. We assess the objects of trust in a democratic setting and discuss how scholars can further shift insights from academia to other sectors. Workshop proceedings will appear in a special symposium issue of C4eJournal.net.
    Speakers:
    Judith Simon (University of Hamburg), Can and Should We Trust AI?
    Vivek Nallur (University College Dublin), Trusting a Carebot: Towards a Framework for Asking the Right Questions
    Justin B. Biddle (Georgia Institute of Technology), Organizational Perspectives on Trust and Values in AI
    Sina Fazelpour (Northeastern University), Where Are the Missing Humans? Evaluating AI Decision Support Systems in Context
    Esther Keymolen (Tilburg University), Trustworthy Tech Companies: Talking the Talk or Walking the Walk?
    Ori Freiman (University of Toronto), Making Sense of the Conceptual Nonsense “Trustworthy AI”: What’s Next?
    3 hr 23 min
  • Conference: Afrofuturism And The Law
    2022/05/19
    Long before the film Black Panther captured the public’s imagination, the cultural critic Mark Dery had coined the term “Afrofuturism” to describe “speculative fiction that treats African-American themes and addresses African-American concerns in the context of twentieth-century technoculture.” Since then, the term has been applied to speculative creatives as diverse as the pop artist Janelle Monae, the science fiction writer Octavia Butler, and the visual artist Nick Cave. But only recently have thinkers turned to how Afrofuturism might guide, and shape, law. The participants in this workshop explore the many ways Afrofuturism can inform a range of legal issues, and even chart the way to a better future for us all.
    Introduction: Bennett Capers (Law, Fordham)
    Panel 1:
    Ngozi Okidegbe (Law, Cardozo), Of Afrofuturism, Of Algorithms
    Alex Zamalin (Political Science & African American Studies, Detroit Mercy), Afrofuturism as Reconstitution
    Panel 2:
    Rasheedah Phillips (PolicyLink), Race Against Time: Afrofuturism and Our Liberated Housing Futures
    Etienne C. Toussaint (Law, South Carolina), For Every Rat Killed
    1 hr 16 min
  • Nathan Olmstead, We Are All Ghosts: Sidewalk Toronto
    2022/04/13
    As the fabric of the city becomes increasingly fibreoptic, enthusiasm for the speed and ubiquity of digital infrastructure abounds. From Toronto to Abu Dhabi, new technologies promise the ability to observe, manage, and experience the city in so-called real-time, freeing cities from the spatiotemporal restrictions of the past. In this project, I look at the way this appreciation for the real-time is influencing our understanding of the datafied urban subject. I argue that this dominant discourse locates digital infrastructure within a broader metaphysics of presence, in which instantaneous data promise an unmediated view of both the city and those within it. The result is a levelling of residents along an overarching, linear, and spatialized timeline that sanitizes the temporal and rhythmic diversity of urban spaces. The same levelling effect can be seen in contemporary regulatory frameworks, which focus on the rights or sovereignty of a largely atomized urban subject removed from its spatiotemporal context. A more equitable alternative must therefore consider the temporal diversity, relationality, and inequality implicit within the datafied city, an alternative I begin to ground in Jacques Derrida’s notion of the spectre. This work proceeds through an exploration of Sidewalk Labs’ pioneering use of the term “urban data” during their foray in Toronto, which highlights the potential of alternative, spectral data governance models even as it reflects the limitations of existing frameworks.
    Nathan Olmstead, Urban Studies, University of Toronto
    31 min
  • Kamilah Ebrahim & Erina Moon, Building Algorithms that Work for Everyone
    2022/04/01
    Oftentimes, the development of algorithms is divorced from the environments where they will eventually be deployed. In high-stakes contexts, like child welfare services, policymakers and technologists must exercise a high degree of caution in the design and deployment of decision-making algorithms or risk further marginalising already vulnerable communities. This talk seeks to explain the status quo of child welfare algorithms, what we miss when we fail to include context in the development of algorithms, and how the addition of qualitative text data can help to make better algorithms.
    Kamilah Ebrahim, iSchool, University of Toronto
    Erina Moon, iSchool, University of Toronto
    20 min
  • Sharon Ferguson, Increasing Diversity In Machine Learning And Artificial Intelligence
    2022/03/23
    Machine Learning and Artificial Intelligence are powering the applications we use, the decisions we make, and the decisions made about us. We have already seen numerous examples of what happens when these algorithms are designed without diversity in mind: facial recognition algorithms, recidivism algorithms, and resume-reviewing algorithms all produce inequitable outcomes. As Machine Learning (ML) and Artificial Intelligence (AI) expand into more areas of our lives, we must take action to promote diversity among those working in this field. A critical step in this work is understanding why some students who choose to study ML/AI later leave the field. In this talk, I will outline the findings from two iterations of survey-based studies that start to build a model of intentional persistence in the field. I will highlight the findings that suggest drivers of the gender gap, review what we have learned about persistence through these studies, and share open areas for future work.
    Sharon Ferguson, Industrial Engineering, University of Toronto
    46 min
  • Julian Posada, The Coloniality Of Data Work For Machine Learning
    2022/03/16
    Many research and industry organizations outsource data generation, annotation, and algorithmic verification—or data work—to workers worldwide through digital platforms. A subset of the gig economy, these platforms consider workers independent users with no employment rights, pay them per task, and control them with automated algorithmic managers. This talk explores how the coloniality of data work is characterized by an extractivist method of generating data that privileges profit and the epistemic dominance of those in power. Social inequalities are reproduced through the data production process, and local worker communities mitigate these power imbalances by relying on family members, neighbours, and colleagues online. Furthermore, management in outsourced data production ensures that workers’ voices are suppressed in the data annotation process through algorithmic control and surveillance, resulting in datasets generated exclusively by clients, with their worldviews encoded in algorithms through training.
    Julian Posada, Faculty of Information, University of Toronto
    47 min
  • Tom Yeh & Benjamin Walsh, Is AI Creepy Or Cool? Teaching Teens About AI And Ethics
    2022/03/16
    Teens have different attitudes toward AI. Some are excited by AI’s promises to change their future. Some are afraid of AI’s problems. Some are indifferent. There is a consensus among educators that AI is a “must-teach” topic for teens. But how? In this talk, we will share our experiences and lessons learned from the Imagine AI project, funded by the National Science Foundation and advised by the Center for Ethics (C4E). Unlike other efforts focusing on AI technologies, Imagine AI takes a unique approach by focusing on AI ethics. Since 2019, we have partnered with more than a dozen teachers to teach hundreds of students in different classrooms and schools about AI ethics. We tried a variety of pedagogies and tested a range of AI ethics topics to understand their relative effectiveness at educating and engaging students. We found promising opportunities, such as short stories, as well as tensions. Our short stories are original, centering on young protagonists and contextualizing ethical dilemmas in scenarios relatable to teens. We will share which stories are more engaging than others, how teachers are using the stories in classrooms, and how students are responding to them. Moreover, we will discuss the tensions we identified. For students, there is a tension of balance: how can we teach AI ethics without inducing a chilling effect? For teachers, there is a tension of authority: which teacher, a social studies teacher well-versed in social issues, a science teacher skilled in modern technology, or an English literacy teacher experienced in discussing dilemmas and critical thinking, would have the most authority to teach AI ethics? Another tension is urgency: while teachers agree AI ethics is an urgent topic because of AI’s far-reaching influence on teens’ future, they struggle to meet teens’ even more urgent and immediate needs, such as social-emotional issues worsened by the pandemic, interruption of education, loss of housing, and even school shootings. Is now really a good time to talk about AI ethics? But if not now, when? We will discuss the implications of these tensions and potential solutions. We will conclude with a call to action for experts on AI and ethics to partner with educators to help our future generations “imagine AI.”
    Tom Yeh, Computer Science, University of Colorado
    Benjamin Walsh, Education, University of Colorado
    58 min
  • Mishall Ahmed, Difference-Centric Yet Difference-Transcended
    2022/03/04
    Developed along existing asymmetries of power, AI and its applications further entrench, if not exacerbate, social, racialized, and gendered inequalities. As critical discourse grows, scholars make the case for the deployment of ethics and ethical frameworks to mitigate harms disproportionately impacting marginalized groups. However, there are foundational challenges to the actualization of harm reduction through a liberal ethics of AI. In this talk, I will highlight the foundational challenges posed to the goals of harm reduction through ethics frameworks and their reliance on social categories of difference.
    Mishall Ahmed, Political Science, York University
    31 min