class: center, middle, inverse, title-slide

.title[
# AI and Ethics II: A Glimpse into the Future
]
.author[
### Thierry Warin, PhD
]

---
class: inverse, center, middle

![](https://miro.medium.com/max/1400/1*04LuhIgfDILIizW3kUGXIQ.jpeg)

---

### Outline

1. Case Studies
2. AI and Ethics
3. Fun Projects
4. Conclusion

---

### A great conversation on consciousness

- https://twitter.com/ilyasut/status/1491554478243258368
- https://towardsdatascience.com/openais-chief-scientist-claimed-ai-may-be-conscious-and-kicked-off-a-furious-debate-7338b95194e
- https://openai.com/dall-e-2/

---
class: inverse, center, middle

# 1. Case studies

---
class: inverse, center, middle

# Case study 1: People Analytics

---

### AI and Ethics: Biases in the data

> Why AI matters for high-stakes decisions...

.panelset[
.panel[.panel-name[AI in HR]
<img src="./images/recruiting1.png" width="350px" style="display: block; margin: auto;" />
]
.panel[.panel-name[HireVue]
.pull-left[
We could also cite the company HireVue, which uses smartphone technology to conduct video-based interviews.

We could also refer to the high-school exams debacle in the UK: https://www.cgdev.org/blog/testing-times-exams-debacle-uk-what-covid-19-has-meant-high-stakes-exams-around-world
]
.pull-right[
<img src="./images/recruiting2.png" width="350px" style="display: block; margin: auto;" />
]
]
.panel[.panel-name[Amazon]
.pull-left[
We could cite here Amazon's secret AI recruiting tool, or Facebook's ad algorithm (Lambrecht and Tucker 2018, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2852260).
]
.pull-right[
<img src="./images/recruiting3.png" width="250px" style="display: block; margin: auto;" />
]
]
]

---

### AI and Ethics: Biases in the data

.panelset[
.panel[.panel-name[Gender bias]
According to the Gender Shades project at the MIT Media Lab:

- "The deeper we dig, the more remnants of bias we will find in our technology."
- "We cannot afford to look away this time because the stakes are simply too high."
- "We risk losing the gains made with the civil rights movement and women's movement under the false assumption of machine neutrality."
- "Automated systems are not inherently neutral."

> "They reflect the priorities, preferences, and prejudices—the coded gaze—of those who have the power to mold artificial intelligence" (Buolamwini 2018).

<https://www.media.mit.edu/projects/gender-shades/overview/>
]
.panel[.panel-name[video]
<iframe width="700" height="400" src="https://www.youtube.com/embed/TWWsW1w-BVo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
]
]

---
class: inverse, center, middle

# Case study 2: Google

---

### Google Duplex

During Google's I/O 2018 developer conference, CEO Sundar Pichai showed off a demo of Google Duplex, an AI assistant that can call and book appointments on your behalf. Google started rolling out the system in the summer of 2019 for professional clients. Initially, the Duplex assistant was restricted to a narrow range of simple tasks, such as booking appointments.
---
class: center, middle

<iframe width="800" height="500" src="https://www.youtube.com/embed/fBVCFcEBKLM" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

---
class: inverse, center, middle

<img src="./images/gduplex1.png" width="100%" style="display: block; margin: auto;" />

---
class: inverse, center, middle

<img src="./images/gduplex2.png" width="100%" style="display: block; margin: auto;" />

---

### Bad research question: Deception by design

Google's experiments do appear to have been designed to deceive:

- can you distinguish this from a real person?

> In this case it is unclear why their hypothesis was about deception and not the user experience... You do not necessarily need to deceive someone to give them a better user experience by sounding natural.

---

### Bad research question: Deception by design

Why did Google choose this approach? The phrase "The Turing Test" is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question of whether machines can think.

According to Turing, the question of whether machines can think is itself "too meaningless" to deserve discussion. However, if we consider the more precise—and somewhat related—question of whether a digital computer can do well in a certain kind of game that Turing describes ("The Imitation Game"), then—at least in Turing's eyes—we do have a question that admits of precise discussion. Moreover, Turing himself thought that it would not be too long before we did have digital computers that could "do well" in the Imitation Game.

The phrase "The Turing Test" is sometimes used more generally to refer to certain kinds of behavioural tests for the presence of mind, thought, or intelligence in putatively minded entities. So, for example, it is sometimes suggested that the Turing Test is prefigured in Descartes' Discourse on the Method.

---

### Bad research question: Deception by design

And this at a time when platform-fueled AI problems, such as algorithmically fenced fake news, have snowballed into huge and ugly global scandals with very far-reaching societal implications indeed — be it election interference or ethnic violence.

You really have to wonder what it would take to shake the "first break it, later fix it" ethos of some of the tech industry's major players...

In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.

---

<img src="./images/google1.png" width="100%" style="display: block; margin: auto;" />

---
class: center, middle

<iframe width="800" height="500" src="https://www.youtube.com/embed/iyiOVUbsPcM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

---

.panelset[
.panel[.panel-name[War of the Worlds]
.pull-left[
On October 30, 1938, Orson Welles conducted a live broadcast of the H. G. Wells classic science-fiction story "The War of the Worlds". The novel was originally written in 1897, but Welles made some changes in his radio broadcast to modernize it.
]
.pull-right[
<img src="./images/orson_welles.jpeg" width="250px" style="display: block; margin: auto;" />
]
]
.panel[.panel-name[Radio talk]
> "This is Orson Welles, ladies and gentlemen, out of character to assure you that The War of The Worlds has no further significance than as the holiday offering it was intended to be. The Mercury Theatre's own radio version of dressing up in a sheet and jumping out of a bush and saying Boo! Starting now, we couldn't soap all your windows and steal all your garden gates by tomorrow night... so we did the next best thing. We annihilated the world before your very ears, and utterly destroyed the C.B.S. You will be relieved, I hope, to learn that we didn't mean it, and that both institutions are still open for business. So goodbye everybody, and remember the terrible lesson you learned tonight. That grinning, glowing, globular invader of your living room is an inhabitant of the pumpkin patch, and if your doorbell rings and nobody's there, that was no Martian... it's Hallowe'en."
]
]

---
class: inverse, center, middle

# Case study 3: Deep Fakes

---
class: center, middle

<iframe width="800" height="500" src="https://www.youtube.com/embed/kEtiajHLmQY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

---

.panelset[
.panel[.panel-name[Deep Fakes]
<img src="./images/deepfake1.png" width="100%" style="display: block; margin: auto;" />
]
.panel[.panel-name[You can do it!]
- [Deep Nostalgia](https://www.myheritage.com/deep-nostalgia?utm_source=organic_blog&utm_medium=blog&utm_campaign=web&tr_funnel=web&tr_country=US&tr_creative=deep_nostalgia&utm_content=deep_nostalgia)
- <https://github.com/llSourcell/deepfakes>
- <https://blogs.rstudio.com/ai/posts/2020-08-18-deepfake/>
]
]

---
class: inverse, center, middle

# 2. AI and Ethics: Lessons

---

### AI and Ethics

1. Try https://coveryourtracks.eff.org/
2. Try https://haveibeenpwned.com/Passwords (a sketch of the underlying API follows on the next slide)
3. https://haveibeenpwned.com/PwnedWebsites
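---

### AI and Ethics

As a hedged illustration of how the second check works without your password ever leaving your machine: Have I Been Pwned exposes a k-anonymity "range" API, where only the first five characters of the password's SHA-1 hash are sent over the network. The sketch below is ours, not from the HIBP docs (the function name `pwned_count` is invented); it assumes the `httr` and `openssl` packages.

```r
library(httr)     # HTTP client
library(openssl)  # sha1()

# Count how many times a password appears in known data breaches.
# Only the five-character hash prefix is ever sent (k-anonymity).
pwned_count <- function(password) {
  hash   <- toupper(paste(as.character(sha1(password)), collapse = ""))
  prefix <- substr(hash, 1, 5)
  suffix <- substr(hash, 6, 40)
  res <- GET(paste0("https://api.pwnedpasswords.com/range/", prefix))
  stop_for_status(res)
  # The response body is one "SUFFIX:COUNT" pair per line
  lines <- strsplit(content(res, "text", encoding = "UTF-8"), "\r?\n")[[1]]
  for (pair in strsplit(lines, ":")) {
    if (pair[1] == suffix) return(as.integer(pair[2]))
  }
  0L  # not found in any known breach
}

pwned_count("password123")  # returns a very large count; never reuse this one
```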
---

### AI and Ethics: Lessons

**Definition**

The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems in late 2016, which included general principles such as the need to ensure that AI respects human rights, operates transparently, and makes automated decisions accountable.

In the same year, the UK's BSI standards body developed a specific standard, BS 8611 (a guide to the ethical design and application of robots and robotic systems), which explicitly names identity deception (intentional or unintentional) as a societal risk and warns that such an approach will eventually erode trust in the technology.

- "Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature," the BSI's standard advises.
- "Avoid unnecessary anthropomorphization," is the standard's general guidance, with the further steer that the technique be reserved "only for well-defined, limited and socially-accepted purposes". (Tricking workers into remotely conversing with robots probably wasn't what they were thinking of.)

---

### AI and Ethics: Lessons

The standard also urges "clarification of intent to simulate human or not, or intended or expected behaviour".

> So, yet again: do not try to pass your bot off as human; you need to make it really clear that it is a robot.

- Another contentious subject is whether forming an **emotional bond** with a robot is desirable, especially if the voice assistant interacts with the elderly or children.
- Which means they can also control how it is used and in what contexts, and they can also guarantee it will only be used with certain safeguards built in.

---

### AI and Ethics: Lessons

To understand fairness, we can mobilize a number of elements, which can be categorized into two groups: the first group is about the data's "DNA", and the second group is about the model design.

- The data "DNA" refers to the potential biases that are inherent in dealing with data. Data issues can take the form of data collection issues or data preparation issues (Suresh and Guttag 2020).
- The second group focuses on the potential issues related to the model design and the results it generates. Model issues can take the form of model development issues, model evaluation issues, model post-processing issues, and model deployment issues (Suresh and Guttag 2020).

Several works have studied how to create fairer algorithms and have benchmarked discrimination in various contexts (Hardt, Price, and Srebro 2016; Kilbertus et al. 2017).

---

### AI and Ethics: Lessons

As we have seen before, the challenges for fairness in data are numerous:

- data as a mirror of the status quo
- is my data representative of my subject population?
- pre-processing, model development, labels, feature engineering
- disparate sample sizes across groups
- censoring in data collection (sample bias)
- different statistical patterns across groups
- lack of transparency, lack of monitoring, people not in the loop
- application context

It is thus important to determine how to evaluate fairness:

- is it enough to be better than typical human decision-making?
- do we know the mistakes made by people?

---

### AI and Ethics: Lessons

- **Group unaware:** What's fair is to totally disregard the gender mix of the applicants who are given loans, for instance.
- **Group thresholds:** Because of historical biases reflected in the data used to create the system's model, women can look less loan-worthy than men. So, we should be able to adjust the confidence thresholds for men and women independently.
- **Demographic parity:** If the goal is for the two groups to receive the same number of loans, then a natural criterion is demographic parity, where the bank uses loan thresholds that yield the same fraction of loans in each group. The "positive rate" is the same across both groups.
- **Equal opportunity:** Here, the constraint is that, of the people who can pay back a loan, the same fraction in each group should actually be granted a loan.
- **Equal accuracy:** The percentage of correct classifications (as loan-worthy or not) should be the same for all genders.

The toy sketch on the next slide makes two of these criteria concrete.
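---

### AI and Ethics: Lessons

A toy sketch, not drawn from any of the cited papers: every number and variable name below is invented for illustration. We simulate loan scores in which group B is penalized by a historical bias, then compare a single, group-unaware threshold with group-specific thresholds against two of the criteria above.

```r
set.seed(42)
n     <- 10000
group <- sample(c("A", "B"), n, replace = TRUE)
# Group B's scores are shifted down, mimicking historical bias in the data
score  <- rnorm(n, mean = ifelse(group == "A", 0.60, 0.50), sd = 0.15)
repays <- rbinom(n, 1, pmin(pmax(score, 0), 1))  # true repayment outcome

fairness_report <- function(threshold_a, threshold_b) {
  approved <- ifelse(group == "A", score >= threshold_a, score >= threshold_b)
  for (g in c("A", "B")) {
    sel <- group == g
    cat(sprintf(
      "Group %s: positive rate = %.2f | TPR = %.2f\n", g,
      mean(approved[sel]),               # demographic parity compares these
      mean(approved[sel & repays == 1])  # equal opportunity compares these
    ))
  }
}

fairness_report(0.60, 0.60)  # group unaware: one shared threshold
fairness_report(0.60, 0.50)  # group thresholds: adjusted for group B
```

With the shared threshold, group B's positive rate and true positive rate both lag group A's; lowering group B's threshold narrows the gap, which illustrates why the criteria above can pull in different directions.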
---

### AI and Ethics: Lessons

So, limitations exist, notably the blind spots in ML:

- the blind spots of a predictive model are those regions of the feature space for which the model makes confident predictions but is incorrect
- they arise from a mismatch between the training data and the data the model faces once deployed
- algorithms can never self-start and detect their own blind spots; human input is essential
- a human-driven approach strategically incentivizes people to identify data points, and their characteristics, in the regions where the model is deployed (see the sketch on the next slide)
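---

### AI and Ethics: Lessons

A minimal sketch of the blind-spot idea; this is our own illustration, `blind_spots` is an invented helper, and `prob`/`truth` are assumed to come from any fitted binary classifier and a labeled hold-out set.

```r
# Flag hold-out points where the model is confident yet wrong:
# candidate blind spots to route to human reviewers.
blind_spots <- function(prob, truth, confidence = 0.9) {
  predicted <- as.integer(prob >= 0.5)
  confident <- pmax(prob, 1 - prob) >= confidence
  which(confident & predicted != truth)
}

# Example with any classifier that returns probabilities, e.g.:
# prob <- predict(model, newdata = holdout, type = "response")
# idx  <- blind_spots(prob, holdout$label)
# holdout[idx, ]  # these regions of feature space need human labels
```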
---

### Zelenskyy

[![](https://media.npr.org/assets/img/2022/03/16/ap22075658789685_custom-a1bc99a60c3820732dba6d20cbed771476f5193b-s1200-c85.webp)](https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia)

---

### AI and Ethics: Lessons

- Montreal Declaration
- OECD
- European Union
- USA

---
class: inverse, center, middle

# 3. Fun projects

---

### Fun project 1

- With the advent of powerful language models the likes of [GPT-3](https://openai.com/blog/gpt-3-apps/) (OpenAI), [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) (Google), [Megatron-Turing](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) (Microsoft), and the biggest LM as of the time of writing this presentation, [Gopher](https://deepmind.com/blog/article/language-modelling-at-scale) (DeepMind), we are now seeing these models being scrutinized on what they are actually able to do in terms of language understanding.
- These models are tested with a battery of NLP tasks, which they master with sometimes super-human capabilities (a sketch for trying one yourself follows on the next slide).
- A new AI trend: Chinchilla (70B) greatly outperforms GPT-3 (175B) and Gopher (280B); [see here](https://towardsdatascience.com/a-new-ai-trend-chinchilla-70b-greatly-outperforms-gpt-3-175b-and-gopher-280b-408b9b4510)
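---

### Fun project 1

A hedged sketch for playing with GPT-3 from R: `ask_gpt3` is our invented helper, it assumes an API key in the `OPENAI_API_KEY` environment variable, and the `text-davinci-002` model name reflects OpenAI's completions endpoint as it existed at the time of writing.

```r
library(httr)

# Send a prompt to OpenAI's completions endpoint; return the generated text.
ask_gpt3 <- function(prompt, model = "text-davinci-002", max_tokens = 64) {
  res <- POST(
    "https://api.openai.com/v1/completions",
    add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
    body = list(model = model, prompt = prompt, max_tokens = max_tokens),
    encode = "json"  # serialize the body as JSON
  )
  stop_for_status(res)
  content(res)$choices[[1]]$text
}

cat(ask_gpt3("In one sentence, why does deception by AI systems erode trust?"))
```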
---

### Fun project 2

[![](https://camo.githubusercontent.com/37b7b9eeea773388ddf21aa313800d83dfcc99dc8358fad14f9772ccd43106e8/68747470733a2f2f78696e6e74616f2e6769746875622e696f2f70726f6a656374732f47465047414e5f7372632f67667067616e5f7465617365722e6a7067)](https://github.com/TencentARC/GFPGAN)

---

### Business Idea

[![](https://api.deepai.org/job-view-file/d4c91009-46c4-45ff-96e2-b4374c3c431b/inputs/image.jpg)](https://deepai.org/machine-learning-model/nsfw-detector)

---

### Face recognition

Go to www.lab.warin.ca

---
class: inverse, center, middle

# 4. Conclusion

---

### Conclusion

- The era of data is upon us. Data is proliferating at an unprecedented pace, reflecting every aspect of our lives and circulating from satellites in space to the phones in our pockets.
- The data revolution creates endless opportunities to confront the grand challenges of the 21st century. Yet, as the scale and scope of data grow, so must our ability to analyse and contextualize it. Drawing genuine insights from data requires training in statistics and computer science, as well as subject-area knowledge.
- Putting insights into action requires a careful understanding of the potential ethical consequences, for both individuals and entire societies.

---

### Conclusion

- The technology exists, and you need to understand it:
  - others will use it and impose it on people who do not understand it (Facebook, TikTok, Sun Microsystems, etc.)
  - ethical issues
  - ... but also beautiful solutions for a lot of our societal issues

---

### Conclusion

- ML aims at:
  - reducing model and data biases in sampling with regard to the populations studied
  - as such, it is a GIGANTIC improvement over current quantitative studies
- Think about human biases relative to AI biases:
  - we can criticize AI biases by comparing them to an ideal model of our societies, but our societies are far from ideal
  - the true question is: does AI make our actual situation worse or better?
  - in other words, does it bring us to live in better societies?
  - should we use AI to diagnose or to act?

---

### Conclusion

- It is your responsibility, as the new generation, as the next managers, as the next decision-makers (in the public, private, and NGO spheres), to think in these terms.

---

### TL;DR

- <https://www.mobihealthnews.com/news/google-cloud-launches-vaccine-distribution-tool-local-governments>
- <https://www.omnicalculator.com/health/vaccine-queue-ca>
- <https://www.relationrx.com/>
- <https://www.nytimes.com/2021/02/07/technology/vaccine-algorithms.html>
- <https://digital.hbs.edu/data-and-analysis/brandeis-marshall-on-the-potential-for-data-equity/?mc_cid=b20e4dd67d&mc_eid=e1df353371>
- <https://aiethicslab.com/big-picture/>
- <https://medium.com/eoraa-co/trending-use-cases-of-gpt-3-by-openai-56318b6a9184>

---

### References

- de Marcellis-Warin, N., Marty, F., Thelisson, E., et al. "Artificial intelligence and consumer manipulations: from consumer's counter-algorithms to firm's self-regulation tools." *AI and Ethics* (2022). <https://doi.org/10.1007/s43681-022-00149-5>