Addressing the opportunities and risks of artificial intelligence

Webinar

    Description

    People with disabilities experience both the greatest opportunities and the greatest risks of artificial intelligence (AI). AI, in mechanizing statistical reasoning, performs best when processing situations and people that are statistical averages. How do we take advantage of the power tools of AI while preventing the potential harms? How can we ensure that people with disabilities can participate in the AI-altered economy and workplace? This webinar discussed emerging regulatory standards and efforts to train AI to serve individual differences.

    Transcript
    >>Emile Tompa: So welcome to the IDEA Speaker Series. Our regular host Dan Samosh is off this week, so I'm filling in for him. I'm Emile Tompa, Executive Director of IDEA and Senior Scientist at the Institute for Work & Health. Hosting with me is Francis Fung, Engagement Co-lead for IDEA and National Manager of Rehabilitation and Clinical Services for the March of Dimes. Please note, we are recording this session, and recordings will be made available on our IDEA website, https://vraie-idea.ca/.

    For those of you who may not know much about IDEA, it is a Social Innovation Laboratory focused on helping create stronger and more diverse labour markets that include persons with disabilities, through knowledge to practice. IDEA is the acronym for Inclusive Design for Employment Access.

    Before we dive into the webinar, I'd like to share a land acknowledgement relevant to the IDEA National Office in Toronto. The National Office of IDEA's Social Innovation Laboratory is situated on what has for thousands of years been the traditional lands of the Huron-Wendat, the Seneca, and most recently the Mississaugas of the Credit River. Today this land is still home to many Indigenous People across Turtle Island, and we are grateful to have the opportunity to conduct our work on it.
    Please note that this acknowledgement is relevant for the Toronto area; however, since we're gathering for this meeting virtually, we suggest you think about the communities in your respective locations as well.

    So I'm going to pass it over to Francis to introduce the speaker for today's session.

    >>Francis Fung: Thanks, Emile. If you're new to the monthly speaker series, the series serves as an opportunity to hear the latest about current topics in the work disability policy arena and learn about activities underway within and beyond IDEA. Before I introduce our speaker for today, I want to let you know that we will be hosting a Q&A session at the end of the webinar for about 10 to 15 minutes, but you can type your questions in the Q&A box at any time throughout the presentation and we'll try to answer as many questions as we can at the end of the webinar.

    Our speaker for today is Dr. Jutta Treviranus, the Founder and Director of the Inclusive Design Research Centre and a professor at OCAD University. Jutta is also a co-lead for IDEA's hub on disruptive technologies and the future of work. Jutta has coordinated many research networks focused on the future of work and employment with and by people with disabilities. Jutta's vast achievements include the Queen Elizabeth II Diamond Jubilee Medal and being recognized recently for her work in inclusive artificial intelligence by Women in AI, with the AI for Good DEI AI Leader of the Year award in 2022, as well as being listed as one of the top brilliant and inspiring women by Women in AI Ethics this year. Jutta is a world expert and takes on many advisory roles; of note, she is the current chairperson of the Accessible and Equitable Artificial Intelligence System Standards Committee.

    Jutta is joining us today for her presentation titled Addressing the Opportunities and Risks of Artificial Intelligence, and I hope you are as excited as I am for this. So welcome, Jutta, and thank you for being here; it's a privilege to introduce you, and now over to you.

    >>Jutta Treviranus: Thank you, Francis and Emile, and thank you to the IDEA project for supporting some of the work that I'm going to be talking about. As Francis mentioned, I've named my talk Addressing the Risks and Opportunities of Artificial Intelligence. AI is a technology, but as we've come to realize, AI as a technology involves organizational procedures, symbols, new words, equations, and most of all a mindset; and I hope to address all of these aspects of AI as it relates to work and disability.

    Given the aspirations of AI developers to turn over decision-making to AI, we've advocated that what we need to do is to "progress at the speed of trust", trust that is earned and maintained. Unfortunately, when we hear about trust from AI developers, as I did yesterday at the World Summit on AI, where many of the leaders of AI were present and presenting, a common thread was: trust us to break things, to make mistakes, and to push forward due to geopolitical and economic imperatives. But to have the right to ask this of "the community", the disability community and the accessibility community, these developers need to have a deeper understanding of the relationship between disability and technology. There's a saying: for most people, technology makes things convenient; if you have a disability, technology makes things possible. Therein lies an awesome responsibility, because technology is relied upon to speak, read, write, learn, affect the world, navigate the world, eat, express love, remember, plan, breathe, and even live. Our relationship to technology is by necessity more intimate. It is essential, because if you have a disability you usually have no choice; it is what makes things possible. This relationship also makes people more vulnerable. We should not have to give our trust to an abusive partner, and people with disabilities are disproportionately vulnerable to the mistakes, to the breaking, of these technologies.

    Beyond guarding our homes, it is implanted in our brains and in our vital organs; and if you have a disability, the opportunities and the risks are at the extremes. The extreme opportunities often mean that people with disabilities are the poster child of AI: they are used to justify the risks and the need for progress. These extremes should not be ignored. There are extreme opportunities: opportunities to recognize speech, gestures, and patterns, find a target object or pattern, match and label objects, remember forever and remind on time, sort possible paths to find the optimum, and detect common mistakes and correct them. There is a huge number of assistive, even miraculous, technologies that change people's lives emerging all the time. AI is wonderful at mechanizing the formulaic: recognizing objects if you're blind, translating gestures, acting as robotic personal service workers, creating intelligent prosthetics, restoring vision, and reading your mind through your EEG patterns. No wonder disability is a poster child of AI. There's a new assistive technology announced almost every week.

    Among the current functions, there is AI for image-to-text description, such as Seeing AI and JAWS Picture Smart AI, and there was just a new one released yesterday. There's real-time captioning, which we're probably using right now, and it is emerging on glasses, so you can see the captions on your glasses with TranscribeGlass. There's sound filtering, so that we can deal with noisy environments. There is voice control, such as Voiceitt. There is Personal Voice, announced by Apple. There is exoskeleton optimization: Stanford has just published work on an exoskeleton. There is gesture control, and among these are Seable and the Wheelie, which allows you to control your wheelchair through gestures. There are smart home systems and robotic assistance; memory and recall, such as Rewind AI; simplification and summarization, such as Detangle; and AI writing support: I'm sure that many of you have used ChatGPT. And of course many of you will have heard of Elon Musk's announcement regarding his foray into the brain-computer interface, so implantable AI in your brain. And not just in assistive technology: in the area of accommodations and disability supports there are assessment tools, monitoring and evaluation tools, and training and configuration tools that are using AI. But even in these extreme opportunities and these amazing life-changing technologies there are flies in the ointment.

    AI in assistive technology can lead to privacy breaches and data abuse and misuse, surveillance and manipulation, homogenization, inverse benefits, connectivity issues, and hallucination. In terms of privacy breaches: how many of you have simply said "agree" to the service contract? Often you have to agree in order to use the technology, and therefore you're making yourself vulnerable to data sharing. Who else gets the data, especially if we're videoing our family to find out what the label is on a piece of medicine? There is an emergence of on-device AI, but at the moment most of these technologies send data to the cloud, where it is then fed to an AI system and is often not very well protected. We're also exposing ourselves to surveillance and manipulation, not just the AT user but also the individuals who are helping someone. There is quite a bit of data gathered regarding our reaction to advertising, which then drives additional targeted advertising and manipulation towards purchasing, or messages that are somewhat manipulative. Stereotypical optimization metrics often make assumptions about us that may not be accurate, or that we may not want to reveal to an AI or to others using the AI, and then there's a pressure to be typical. AI is a homogenization system, because AI can hide your uniqueness. Does it make it harder for people who can't use the AI? We have a loss of differentiation. And, unfortunately, the benefits of AI are inverse to the needs, because AI systems work the worst for the people who need them the most: if you have a voice or speech that is far from the average, if you live in an environment far from the place where the AI was trained, or if you use a language unlike the language the AI was trained on, then it's going to work the worst for you.

    What happens, therefore, with employers who assume these individuals have been accommodated and nothing further is needed because they have AI? Also, because much of the intelligence is in the cloud, these technologies consume a great deal of bandwidth, and this is not a cost that most financial benefits programs, such as ODSP or the ADP in Ontario, recognize. And of course many of you have probably heard of "the hallucinations of AI", and that's a somewhat misleading term: AI is trained to be believable, not to tell the truth. The goal of AI is to produce something that someone can interpret as true, or that can mimic human assertions or human reasoning. If we look at what the large language models and generative AI are producing, they have been fed data that often contains ableist slurs; in some of that data, disabled people are portrayed as inferior, unattractive, tragic victims, suffering. There are many bad jokes about disability in that data set, and there's also benevolent ableism, the propagation of what has been called inspiration porn. But even the protections against these tend to backfire. We have captioning toxicity filters that don't let you talk about critical topics: in our lectures at the university there is a course on the sex trade, and the students who rely on the captioning to understand the lecture are given replacements for the topical words, so they cannot follow the lecture. There are incidences where people with facial differences are cancelled from Instagram, and there's an assumption that disability is a topic that should not be spoken about, or that people with disabilities are asexual; so both the image-producing systems and the text-producing systems filter that out. But I'm most worried about the extreme risks of AI. AI is also an infrastructure of disability discrimination.

    The way I like to talk about AI to people who are not familiar with it, and I've been working with AI since the '80s and through multiple generations of AI, all of which are of course controlled and produced by people, a point that is missed in some of the dystopian and utopian narratives, is this. In the first generation of AI, people told the AI: here are the rules, follow them literally and accurately. In the second generation of AI, we told the AI: here is available data, use statistical reasoning to optimize the selected metrics. And in the third generation, when everyone started to become aware of AI, we said: here's all the data we have, you figure it out, make new connections and determine weights based on past relationships and possible combinations.
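    To make the contrast between the first two generations concrete, here is a minimal, hypothetical Python sketch; the hiring scenario, field names, and numbers are invented purely for illustration.

```python
from statistics import mean

# Generation 1: people hand the system explicit rules to follow
# literally and accurately.
def rule_based_decision(applicant):
    # A human-stated rule: "interview anyone with >= 5 years experience."
    return applicant["years_experience"] >= 5

# Generation 2: people hand the system past data and a metric to optimize;
# the "rule" becomes whatever statistically fit that past data best.
past_hires = [
    {"years_experience": 4, "succeeded": True},
    {"years_experience": 7, "succeeded": True},
    {"years_experience": 2, "succeeded": False},
]

# Learn a threshold from past successes instead of stating it as a rule.
learned_threshold = mean(
    h["years_experience"] for h in past_hires if h["succeeded"]
)

def statistical_decision(applicant):
    # Optimizes resemblance to past successes.
    return applicant["years_experience"] >= learned_threshold

print(rule_based_decision({"years_experience": 6}))   # True
print(statistical_decision({"years_experience": 6}))  # True (threshold 5.5)
```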

    The extreme danger of AI is when that second generation of AI is asked to find, match, sort, label, measure, optimize, calculate, and analyze people at scale, and this is most often applied in the employment arena. There's a problem, and people with disabilities are the collateral damage: the way AI is currently designed is hostile to difference, and disability implies difference. The problem, of course, precedes AI. AI is simply a power tool: it is taking something that we have done in the past ourselves and mechanizing it. It is mechanizing, accelerating, amplifying, and automating existing patterns based upon data, and all data, of course, is from the past; and it's doing it more efficiently, accurately, and consistently, thereby propagating discrimination faster, more efficiently, and more accurately. Unfortunately, despite the fact that there is now a burgeoning AI ethics industry, this is missed by AI ethics efforts; and to me it is an existential flaw in our assumptions about truth and our sense of what is of value. My first alarm about this, and there are probably people in the audience who have heard this story, because it's an oft-told and frequently repeated story, was in 2013, now more than a decade ago, when I was asked by the Ministry of Transport to assess some AI systems that would guide automated vehicles through intersections. I decided to test them with a capture of a friend of mine who pushes her wheelchair backwards through the intersection. Even people who encounter her in the real world in the intersection often think she has lost control, and they push her back to the side of the intersection she came from. All of the AI systems that I tested in the simulation chose to run her over: they decided to proceed through the intersection, assuming that she was moving forward. They all said, these are immature models; come back when we've given them more data about people in wheelchairs and intersections.

    Now, the point of the story that is frequently missed is what happened then, and this is the most important point: the systems that were more mature, that had been fed a ton of data about people in wheelchairs and intersections, actually chose to run my friend over with greater confidence. And therein lies the problem with AI ethics efforts, because even within the disability community and the accessibility community the call is for full proportional representation within the data. It's all about filling the data desert, making sure that people with disabilities are included in the data; but of course, even with full proportional representation, the system is a statistical reasoning machine. To illustrate that further, and apologies to those who have seen this before as well: one of the things that I've been doing for the last 40 years in the field is to collect data from anybody I meet. I ask them: what do you need to fully participate? What do you need to thrive? I have this massive set of data, and however I plot it (and the only way to plot it, because the answers are so diverse, is as a high-dimensional multivariate scatter plot), it looks like a normal distribution. I call it my human starburst. What you'll note is the pattern Pareto discovered when he talked about the 80/20 principle: 80% of the needs are clustered in 20% of the middle space, and 20% are distributed throughout the 80% of the peripheral space. The needs in the middle are close together, meaning they're very similar to each other, and there are huge economies of scale there; but the needs out at the periphery are far apart, meaning they're more different from each other. Of course, the needs of people with disabilities tend to be out at that jagged edge of the human starburst. And the pattern with design, and with almost anything we encounter in our lives, all technologies, is that design works quite well for the 80%. As you differ from that, it becomes difficult to use, and if your needs are out at the outer edge, most designs, whether it's transportation, housing, the tools we use within employment, or the way we apply for employment, tend not to work. Unfortunately, the same pattern appears because we are using statistical reasoning in our research, in our knowledge production, et cetera. AI is a mechanized statistical reasoning machine, and so any determination from that second generation of AI is highly accurate in the middle, inaccurate as you move away from the middle, and wrong for anyone out at that jagged edge. Of course, this is an existing pattern, and it feeds into the disparity that we're currently seeing within our society, because we have a fit with most designs if you're average and a misfit if you're not average.
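    As a rough illustration of that accuracy gradient, here is a minimal Python sketch on synthetic data; the five "need" dimensions and the average-based predictor are assumptions chosen only to expose the pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
needs = rng.normal(size=(10_000, 5))       # a 5-D "starburst" of needs
true_requirement = needs.sum(axis=1)       # what each person actually needs

# Statistical reasoning in miniature: serve everyone the population average.
prediction = np.full(len(needs), true_requirement.mean())
error = np.abs(true_requirement - prediction)

# Error grows with distance from the middle of the scatter plot.
distance = np.linalg.norm(needs, axis=1)
for lo, hi in [(0, 1), (1, 2), (2, 3), (3, 10)]:
    band = (distance >= lo) & (distance < hi)
    print(f"distance {lo}-{hi}: mean error {error[band].mean():.2f}")
# Accurate in the dense middle, increasingly wrong at the jagged edge.
```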

    Products, especially with Moore's law, the digital inclusion products, are becoming more available, reliable, and functional, and the cost is decreasing; but if you are out at that periphery, things are getting worse and worse.

    Knowledge: I'm a professor, I live in academia, and the gold standard of research, truth, evidence, and knowledge is based upon statistical reasoning; the subjects that don't use it are defunded and ignored. If you can't have statistical power in your research, you're less likely to be published in a journal, less likely to get funded, and less likely to receive tenure and promotion. So we have a vacuum of knowledge about the needs and the realities of people who are out at that outer edge.

    Education: we are doubling down on standardized learners and students, and we're trying to serve an employment market which is looking for replaceable workers. And even as we are defending democracy, and as democracy is under attack, we've reduced it to one person, one vote, without enough attention to human rights; and if you focus on one person, one vote, then the trivial needs of the majority will outnumber the critical needs of the minority.

    Of course, this doesn't just harm people with disabilities; it harms society as a whole. We reduce innovation because we have greater conformance, greater lock-in, less flexibility, less extensibility, and less resilience and responsiveness; and we subject ourselves to mass production, mass communication, mass marketing, and a popularity push which narrows our focus on the world. So in essence, through our practices and our conventions, we are amplifying this; we are reducing diversity, denying complexity, and homogenizing towards a monoculture, which, within an employment scenario, makes those organizations that have a monoculture much more vulnerable to unexpected changes within the environment. We all know that the two predictions we can definitely depend upon are death and disability. At some point, all of us, or someone we love and care about quite a bit, will be out at that outer edge. If, however, we design for that entire spectrum, then everyone has room for change and growth, and that is where the innovations occur and where the weak signals of issues to come appear, because people out at the outer edge are most vulnerable to the cracks that are emerging; and it isn't the complacent who have the ideas for how to innovate, because they do not need to change the world.

    The unfortunate assumptions behind pervasively deployed AI hiring tools are that past success equals future success, that optimizing the data characteristics associated with past successes increases future successes, and that the data characteristics that determine success need not be specified or known to the operators of the AI. So if you're purchasing an AI hiring tool, you're not told what data it was trained on or what data is used in each update. And AI, according to the EEOC (the US Equal Employment Opportunity Commission), is used by more than 90% of organizations, whether they know it or not.
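    Here is a hedged, hypothetical Python sketch of that "success profile" assumption; the features, numbers, and similarity scoring are all invented, since, as noted above, purchasers are not told how the real tools work.

```python
import numpy as np

# Invented features for past "successful" hires: typing speed (wpm),
# years of continuous employment, video-interview eye-contact score.
past_successes = np.array([
    [80.0, 10.0, 0.90],
    [75.0,  8.0, 0.80],
    [85.0, 12.0, 0.95],
])

# Optimize toward the averaged profile of past successes.
success_profile = past_successes.mean(axis=0)

def score(applicant):
    # Higher score = more resemblance to past hires.
    return -np.linalg.norm(applicant - success_profile)

typical_applicant = np.array([78.0, 9.0, 0.85])
# An equally capable applicant who types via switch access and does not
# make eye contact: different, and therefore penalized.
different_applicant = np.array([20.0, 9.0, 0.10])

print(score(typical_applicant) > score(different_applicant))  # True
```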

    There is some form of AI used in employment decision systems. It is employed in recruiting announcements (where should you publish your announcement regarding a hiring opportunity?), in application filtering, in interviews and assessment, in productivity monitors and surveillance, and in promotion or disciplinary actions. All of you are probably exposed in some way through your job to AI decision systems, and bias towards an optimal pattern, or a success profile that comes from data (and the data, of course, is from the past), equals bias against difference; and of course people with disabilities are different. AI is getting better and more accurate, but as it gets better and more accurate, it is getting more accurately, efficiently, and consistently discriminatory. Beyond employment, if you have a disability, you're encountering this pattern in all sorts of ways. It is deployed in employment, in academic admissions, in medical calculators and triage tools, in policing and parole systems, in immigration and refugee adjudication, in tax auditing, loans, and mortgages, and in security flagging. At the community level, it is used to make decisions regarding evidence-based government investment, political platforms, public health decisions, urban planning, emergency preparedness systems, and security measures, and in more trivial areas such as news coverage, the products that will be marketed within any platform, GPS routes, supply chain priorities, and the design features selected by companies creating products. What we have is a series of bad, unfair, inaccurate decisions. Unfortunately, even the positive things that AI produces are problematic for people with disabilities; and at the other end, because they're not recognized by the systems, people with disabilities are subject to false flagging in suspicion machines such as security systems at airports or CRA tax auditing. One of the things that, unfortunately, I've been experiencing since I started to talk about this is an "I told you so" phenomenon in the harm and incident databases that have emerged regarding the impact of AI.

    There are disproportionate reports related to disability, and those are increasing all the time, and there are some heartbreaking issues emerging: parents with disabilities are falsely flagged as unfit, and there is a rise in iatrogenic death and illness if you're not like the average, due to the rise of medical calculators. Iatrogenic means death and illness due to the treatment, not due to the illness you're presenting with. Unfortunately, most of the field is focused on privacy protections. They think the harm of AI is that you're losing your privacy, but the reality for people with disabilities is that you've frequently already bartered your privacy for essential services: you've had to hit the accept button in order to get things that are essential for you to live. And the privacy protections that are offered when governments and services assure you "we are private by design", which is usually anonymization at source, fail if you are unlike the average: if you're highly unique, you will be re-identified. If you're the only person on the block who is ordering, say, a prosthetic or an ostomy bag or something of that nature, you will be re-identified; it's easy to figure out who you are and what other data is associated with you (a small sketch of this follows below). And the other privacy protection that is supposed to be a better means of protecting your privacy, called differential privacy, where you remove the data that would identify you through your uniqueness, actually removes the data that is needed to make the AI work for you. Recovery from data abuse and misuse is not available: if you have had a privacy breach, there's hardly any service that will help you figure out how to get over the fraud, how to deal with the financial fraud or with the issues regarding the release of your medical records or your insurance information. But the primary message I want to give is that statistical reasoning as a means of making decisions does harm; assuming that what we know about the majority applies to the minority does harm. Despite all of the AI ethics efforts that are emerging, it's more than addressing data gaps, it's more than removing human bias from algorithms, and it's more than removing the stereotypes from labels and proxies. Statistical discrimination unfortunately also occurs in these AI ethics measures. Most of them are based upon impact assessments and risk-benefit analyses, and the hugely impactful things that happen to a minority or an outlier are seen as insignificant when determining the metrics and thresholds within these protections, or are invisible or insignificant in error testing. There's an interesting study that shows how many deaths you would have to have before the medical error tests would say that there is harm within a particular medical system or within some of the AI technologies that are released. I won't get into the details of how determinations are made that there are issues, but suffice it to say that most people with disabilities are falling through the cracks, stranded at the edges of the bounded clusters that are used to determine that something needs to be acted upon. We're invisible in a risk-benefit framework: an anomaly, merely an anecdote.
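    A minimal Python sketch of the re-identification point made above; the postal prefixes and purchases are invented, and the uniqueness check is a bare-bones stand-in for k-anonymity reasoning.

```python
from collections import Counter

# "Anonymized" purchase records: names removed, (postal_prefix, item) kept.
records = [
    ("M5V", "groceries"),
    ("M5V", "groceries"),
    ("M5V", "groceries"),
    ("M5V", "ostomy bags"),  # the only such purchaser on the block
]

item_counts = Counter(item for _, item in records)

def reidentifiable(item):
    # Unique attribute combinations (k-anonymity with k == 1) point straight
    # back at one person, even without a name in the data.
    return item_counts[item] == 1

print(reidentifiable("groceries"))    # False: hides in the crowd
print(reidentifiable("ostomy bags"))  # True: uniqueness exposes identity
```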

    So, given all this doom and gloom, and sorry for the very gloomy picture that I'm painting: I think within AI, because AI is a mirror of our current conventions, there is also an opportunity to rethink; and within employment, I think there's an opportunity to rethink human resource processes, because human resources is all about creating replaceable workers. There is this assumption that people are carbon copies of a job description, rather than thinking about diversifying perspectives within our workplaces, investing in human potential, hiring people not based upon a formulaic set of metrics but looking at how we can foster human potential within these diverse perspectives, and looking at adaptable roles rather than replaceable workers: thinking about how we can adapt the situation within the workplace to the individuals who are part of the team we've recruited. So how are we addressing trustworthy AI? I talked about progress at the speed of trust, and we're doing quite a number of things, and luckily IDEA is supporting some of these. One of the things we're doing is just to get across what is happening in our conventions, how we have done this in the past. A very simple example: most of you have probably been exposed to a word cloud, where a question is posted and everybody posts an answer, you know, what is your favourite colour, or what topic is most top of mind for you? What will happen in a standard word cloud is that the most common, majority words grow in size and go to the middle so that everybody attends to them, and people pile on, so it becomes this echo chamber. What we've done is to invert that, so that the novel, unique, outlying, minority words grow in size and come to the middle (a small sketch of this inversion follows below), just as a tweak to people's attention: hey, look at what we're doing here in terms of the topics we pay attention to. Is popularity the only metric we should use for prioritizing? We've also created something called "The Trust Meter", which is under development, where we're looking at the suitability of a decision system for the individual or group that you're deciding upon: based upon the data used to train the AI, is this person beyond a threshold of representation within the data where the determination by the AI will be inaccurate?
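    Here is a minimal Python sketch of that word cloud inversion; sizing words by the reciprocal of their frequency is one plausible way to do it, not necessarily the exact weighting the team used.

```python
from collections import Counter

answers = ["blue", "blue", "blue", "green", "blue", "vantablack", "green"]
counts = Counter(answers)

# Standard word cloud: size grows with frequency (the echo chamber).
# Inverted word cloud: size grows with rarity, so outliers draw attention.
inverted_weights = {word: 1 / n for word, n in counts.items()}

for word, w in sorted(inverted_weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:12s} relative size {w:.2f}")
# "vantablack", said once, now renders largest; "blue" shrinks to the edge.
```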

    We're also playing with algorithms. We have an alternative algorithm which we call our "Lawnmower of Justice": basically, we mow down the Gaussian curve so that people who are like other people don't have the advantage, and the AI model needs to attend to a larger spectrum of data (a rough sketch of this idea follows at the end of this passage).

    One thing that we're working on right now, which I'm quite excited about, is the "How Would You Do Things Differently?" project. I was challenged by people who have heard my issues with AI: well, we'd like to give you an opportunity to do it differently. And how we're doing it differently is that we're looking at chatbots for the people who need them the most. Who needs communication systems the most? Who needs systems that assist with language? So we're working with people who are non-speaking, the community of AAC (augmentative and alternative communication) users, who became literate through Blissymbolics, a universal symbol language. This is a very community-driven project, where it is the end users who are designing the system and determining what the design decisions should be, rather than basing it upon the average or the typical or the majority. We're creating a system that can be one-size-fits-one, deeply individualized to each individual AAC communicator, and we call it the Baby Bliss Bot project.

    What I have up on the screen is the Blissymbolics representation of Baby Bliss Bot. Bliss was a universal symbol language developed by Charles Bliss during the Second World War. He saw it as a means of connecting across languages and across cultures, rather than being a phonetic language; and of course phonetic languages are very difficult if you're non-speaking. If you're a child who cannot speak, you cannot sound out the words, so the phonemes are very mysterious things, because you're not able to produce them, and therefore there's a huge barrier to literacy. These children also tend to have issues with motor control. But as they approach employment, the other really interesting thing about Bliss is that with only a few symbols you're able to communicate a very abstract and complex message, because each of the symbols is very heavily meaning-laden. So rather than having to type out, say, 200 letters, or even words, which if you are using single-switch scanning can take a huge amount of time, with an opportunity for error at each point, you can choose a few symbols and the meaning is clear. Unlike, say, if you say "bank" and people don't know whether you're talking about a bank or the bank of a river, Bliss symbols are ideographic; they're symbolic, and each of the symbols has the meaning within it rather than the phonetic pronunciation.
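    As promised above, here is a rough Python sketch of the "Lawnmower of Justice" idea; this density-based reweighting is my own illustration of mowing down the Gaussian curve, not the actual IDRC algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=2_000)  # the over-represented average in the middle

# Estimate each sample's local crowding with a histogram...
hist, edges = np.histogram(samples, bins=40)
bin_index = np.clip(np.digitize(samples, edges) - 1, 0, len(hist) - 1)
crowding = hist[bin_index]

# ...then "mow down" the curve: training weight is inversely proportional
# to how crowded a sample's neighbourhood is, flattening the peak.
weights = 1.0 / crowding
weights /= weights.sum()

middle = np.abs(samples) < 1
print(f"middle share of samples: {middle.mean():.0%}")         # ~68%
print(f"middle share of weight:  {weights[middle].sum():.0%}")  # far less
```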

    What we're looking at is how we can use this Baby Bliss Bot to allow individuals who have faced the greatest barriers to employment, but also to communication, to communicate effectively and efficiently and to be understood. This is a project that has participation from many countries and many experts in AI and linguistics; there are more than nine disciplines and more than nine countries involved, and we are attempting to do quite a number of interesting things with it.

    Associated with this is a project we call Mad Escapes from the Grid. One of the things we discovered in working with individuals seeking employment who rely on AAC is that frequently the vocabulary available to you is dictated by a grid or an on-screen keyboard, and unfortunately, unlike many other technologies, there are actually fewer choices available to you if you're non-speaking than there were in the '80s, or fewer ways in which you can access the technology. Most people who have few voluntary movements and cannot directly point to an item they want to choose are dependent on something called row-column single-switch scanning: you hit the switch and it scans through a set of rows, you hit the switch again and it scans through the columns, and you hit the switch again when it arrives at the selection you want. This is obviously extremely inefficient and extremely slow (a rough sketch of just how slow follows at the end of this passage). So we're also looking at how we can use AI to recognize other voluntary movements that can be reliably made, and then assign those to different actions; and how we can create a display of possible choices in such a way that it matches the way we formulate messages, the way we think about the choices in how we want to communicate a message. With that, we are adding context awareness, awareness about who the partner is, such that this Baby Bliss Bot will be a companion that can stand in for the familiar companions that people have relied upon in many situations, especially when those companions pass away, or the trusted aide or friend is not available.

    The other thing we're doing is that we have drafted a regulatory standard for the Accessible Canada Act, called the Accessible and Equitable AI Standard, and before the actual regulation is available it will be released as a guidance document from ASC (Accessibility Standards Canada). Within that, we are layering it on top of other AI protections, such as AIDA, or the EU AI Act, which you may have heard about in the news, or the US AI initiatives through NIST, or the Directive on Automated Decision-Making.
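    As promised above, a rough Python sketch of why row-column scanning is so slow; the grid size and per-step timing are illustrative assumptions, not measurements.

```python
def scan_steps(target_row, target_col):
    # The highlight steps row by row until the target row is reached, then
    # column by column until the target cell; the user hits the switch once
    # to stop on the row and once to stop on the column.
    return (target_row + 1) + (target_col + 1)

STEP_SECONDS = 1.0  # assumed dwell time per highlight step

# Selecting one symbol near the bottom-right of a 6x8 on-screen grid:
steps = scan_steps(target_row=5, target_col=7)
print(steps * STEP_SECONDS)  # 14 seconds for a single selection

# A 200-letter message at an average of ~7 seconds per selection:
print(200 * 7 / 60)  # over 23 minutes, with an error opportunity at each step
```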

    According to the OECD, there are now over 800 regulatory documents being drafted, or guidance documents, commitments, or MOUs, regarding protections for AI; but none of them, unfortunately, contain the four things we are looking at. The first is the accessibility of the AI ecosystem, not just for people with disabilities as consumers of AI, but also as developers, designers, implementers, monitors, and evaluators of AI. Then we wish to deal with statistical discrimination, so more than the data gaps or the privacy breaches, et cetera, that are addressed within the other protections; because most of the other legislation relies on impact assessments or risk assessments, no protections are deemed to be needed if something is judged to be of low or medium risk or impact.

    One of the things we also want to pay attention to is cumulative harm: if every decision that is deemed to be low-risk or low-impact goes against you, then it's like death by a thousand cuts, and how do we address that? And lastly, we want to make sure that when people learn about AI, they also learn about equitable and accessible AI, that people with disabilities can participate in AI education, and that AI education is accessible. This is seen as the first phase of a series of regulatory standards that we will release in this area, and it is in four parts: one about accessible AI, one about equitable AI, one about organizational processes that support accessible and equitable AI, and one about those education concerns.

    IDEA has been supporting us in making sure that people with disabilities on these standards committees have the capacity to advocate, and are not sitting on the committee in an inequitable position; because certainly the regulated entities and the industry are well resourced with the research, the knowledge, and the technical resources to argue their point, and we want to make sure that people with disabilities also have the resources to understand the arguments and the technologies. So we've been producing visualizations and case examples, and we have created an accessible guidance document that will shortly be released; among those are ways in which we're trying to make the AI process, and what is needed in terms of the protections, as understandable as possible. What I'm showing on the screen at the moment is a flow chart; you will have access to this PowerPoint presentation, where it is described in great detail.

    So, to conclude: what we want is for AI to move forward, to take advantage of those extreme opportunities, but at the speed of trust, and trust that is earned, and trust that is proven. And for a plug, I also want to tell you about a conference we'll be holding on May 27th and 28th called Accessible Canada Accessible World, where this will be one of the topics we'll be discussing. We're running it like a flipped classroom, where the presentations will be available ahead of time, and we will use the time together, either online or in person at Concordia University in Montreal, to develop plans and strategies for how to address a series of critical problems, like this problem with AI. But I want to leave you with the note that intelligence that works with the edge of our human scatter plot is better able to adapt to change and respond to the unexpected, detect risk, and transfer to new contexts; it results in greater dynamic resilience and longevity, will reduce disparity, and may hold the key to our survival. Certainly within the employment sector, that is what organizations need, because we're facing a fairly rocky future at the moment, and I'd love to continue the conversation.

    >>Francis Fung: Thank you, Jutta, for a very thought-provoking presentation. We're going to get to some questions, since we have a few minutes left, if that's okay with you. There are a couple of comments, positive comments, about your idea of inverting the process regarding the word cloud, flipping perspective, and also statements regarding adaptable roles rather than replaceable workers from a human resources perspective; I just want to share that with you. We're going to get to a couple of specific questions. One of the questions we have is regarding whether art, and collaborations with artists, can play a role in creating AI that does not automate existing patterns and thereby propagate biases and discrimination, and whether you have any advice on how AI researchers can or should approach collaborations with artists.

    >>Jutta Treviranus: Yeah, so I think one of the distressing things to see is that in education a lot of the talk of AI is: we need more STEM, we need more engineers, we need prompt engineers. But I think when we look at the trajectory of where work is going, and what the future of work is, what we actually need are critical thinkers. STEM is formulaic, and AI is great at replacing formulaic things. If we're preparing our students to be adept at those formulaic things, then we're creating this collision course. So I think the future is in art, in the humanities, in that human judgment, and artists and designers play a huge role there, because we need that human creativity, that uniqueness, that way of differentiating yourself by poking at our perceptions, at making people think. Unfortunately, at the moment, art and design is all worried about GenAI: is GenAI going to take work away, is GenAI going to replace the artists, et cetera. But no, I definitely don't think so. There's a talk we recently gave about art and design and AI, and the possibility that AI can do some of the mundane, formulaic things that artists rely upon, but act as a platform upon which to show and advance that human ingenuity and human creativity.

    >>Francis Fung: Thank you, Jutta. There's another question, specifically about the Bliss symbols you talked about earlier: whether the Bliss symbols can be usable by someone with aphasia, someone who has damage to a language centre of the brain, as opposed to being physically unable to speak.

    >>Jutta Treviranus: Yeah, and that's a fascinating area, because they're symbolic and, like the Chinese writing system, the symbols relate to what they're representing, but they're not pictorial. With picture systems it tends to be very concrete; it's very hard to talk about abstract concepts when you're using pictures. So there is some research that shows that for someone with aphasia, who has damage to their language centres, depending on where the aphasia came from, what the neurological cause of the aphasia is, Bliss symbols are far easier to use than phonemic and phonetic systems. The other good thing about Blissymbolics is that it can be very telegraphic, so you don't need to choose very many things to get your message out. And actually, what we're doing with the Baby Bliss Bot is this bag-of-words strategy: you can pick a set of words in any order; you don't need to worry about syntax or grammar. You just give the meaning concepts, and what large language models are actually quite adept at providing is a proper sentence using those words. The model is able to deduce that if I say "door cold", it should produce "please close the door, I'm getting cold", or whatever. So we're using the intelligence that is already within the large language model. The one bad effect of that is that some of our AAC users are saying, well, that doesn't sound like me anymore: their quirky turn of phrase, the jocular way in which they interact, is lost, or is overridden by the large language model. So we started with mostly transactional communication, but now we're also tuning the AI to use the idioms and the turns of phrase that each AAC user is comfortable with.
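    A hedged Python sketch of that bag-of-words strategy; the prompt wording and persona handling are my own guesses at one way to do it, and the actual Baby Bliss Bot pipeline is not specified in this talk.

```python
def bag_of_words_prompt(concepts, persona="neutral"):
    # Hand the LLM the user's unordered meaning concepts and ask for a
    # well-formed sentence; syntax and grammar are left to the model.
    return (
        f"Speaking in a {persona} voice, turn these unordered concepts into "
        f"one natural first-person sentence: {', '.join(concepts)}"
    )

# "door cold" -> something like "Please close the door, I'm getting cold."
print(bag_of_words_prompt(["door", "cold"]))

# Tuning for the user's own voice: the same concepts, a different persona.
print(bag_of_words_prompt(["door", "cold"], persona="wry and jocular"))
```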

    >>Francis Fung: Fascinating. So we're just at about 1:00 now. There are a couple more general questions in the Q&A section, but I'm afraid we don't have time to get to them, so we'll try to send them to you so that you can respond to them. We are also going to have the PowerPoint presentation available for the attendees. So, on behalf of IDEA, I want to thank you for taking the time to present, and of course all the attendees as well. I hope you all enjoyed the presentation; come back for our next session of the speaker series, and stay tuned. Thank you very much and have a good day, everyone.