Kelley School of Business Indiana University

    Transform your leadership with AI strategy for executives

    Lead with confidence in an AI-enabled world and set yourself apart as a forward-thinking business leader. AI Strategy for Executives gives leaders the insight and tools to harness AI for innovation and strategic impact. Through this short course, you’ll explore emerging, cutting-edge technologies; understand how AI adoption is reshaping organizations and decision-making; and learn to apply AI-driven strategies that enhance leadership effectiveness.

    Register now


    Leading with AI: AI Strategy for Executives

    Leading AI Adoption in Organizations

    In this short course, we will examine AI through three different lenses, giving executives a comprehensive perspective on how to make AI applicable to their careers and their organizations.

    1. AI technologies: Analyze the technologies behind AI systems, understand AI from a historical perspective, and examine state-of-the-art AI solutions, including machine learning, deep learning, and the technologies associated with generative AI.
    2. AI for individuals: Examine AI's impact on individuals, including digital humans, collaborative work between human and digital agents, and how generative AI may enhance productivity.
    3. AI for organizations: Learn why AI matters to organizations, how it affects organizational processes and products, and best practices for implementing AI initiatives.

    Upcoming dates

    Dates: March 24 to April 28
    Duration: 6 weeks
    Delivery: Online (instructor-led); live 90-minute sessions, weekly on Tuesdays, 7 to 8:30 p.m. ET
    Price: $1,495

    Register

    Learning objectives

    • Different technologies related to AI
    • The differences between machine learning and deep learning
    • The technology underpinnings of generative AI
    • The future of work from an individual perspective
    • The main factors influencing AI investments in organizations
    • The issues involved in AI adoption and implementation

    Course details

    • Live-virtual online classes
    • Two 90-minute sessions for each of the three modules. Sessions occur once a week.
    • Attendance is expected, but sessions will be recorded for those who have conflicts.
    • Cost: $1,495
    • Discounts: Available for IU alumni, staff, groups of three or more participants, and purchases of four or more courses*

    *Purchase four or more courses in the Leading with AI series and receive a $1,500 discount. Pay a total of $4,480 (regularly $5,980), a 25% savings.

    Want to learn more?

    Fill out the form below to request more information.

    Showcase your new skills

    Each course offers the opportunity to complete an optional Action Learning Project, applying course concepts to a real organizational challenge you face. Participants who complete this project earn a digital badge, a verifiable credential you can showcase on platforms like LinkedIn.


    Build toward a professional certificate

    Complete four courses from the Leading with AI professional development series and earn the Kelley Professional Certificate in AI Leadership.


    Access a customizable supervisor request letter to support your case for attending a Kelley professional development course.

    Course outline

    Module 1 (two weeks):

    • Analyze technologies related to AI systems
    • Gain AI historical perspectives and a state-of-the-art take on AI solutions
    • Explore machine learning and deep learning
    • Focus on technologies associated with generative AI

    Module 2 (two weeks):

    • Examine the impact of AI on individuals
    • Explore digital humans and collaborative work between human and digital agents
    • Focus on how productivity may be enhanced by using generative AI

    Module 3 (two weeks):

    • Understand why AI impacts organizations
    • Address how AI can affect organizational processes and products
    • Learn best practices for implementing AI initiatives in organizations

    The Future is AI: Artificial Intelligence for Business

    In this webinar, Professor Alex Barsi Lopes discusses real-world applications currently reshaping major industries and strategic approaches to integrating AI into business operations. You’ll also uncover emerging opportunities for professionals in the AI landscape and career pathways in an AI-enhanced future. Gain new insights into how self-awareness, emotional intelligence, and AI tools can work together to enhance clarity and decision-making.

    Description of the video:

    [Kim Allison] All right, it's one o'clock, so we'll go ahead and kick things off; I want to be respectful of everyone's time. Hi everyone, and thank you so much for joining us today. We're thrilled to have such a great group in this session; last I checked we had over 700 people registered, which is fantastic. Thanks for taking an hour out of your day to join us. I'm Kim Allison with Kelley Executive Education, and it's my pleasure to introduce our speaker today, Dr. Alex Lopes, who will be diving into the future of AI and its impacts on business, a really exciting topic. Dr. Lopes is a clinical professor of information systems and Grant Thornton Scholar at the Kelley School of Business. He also serves as associate chair for Kelley Executive Education Programs, where he leads our online MS programs and professional certificates. With that, I'd like to hand it over to Dr. Lopes; thank you so much for your time and for joining us today.

    [Alex Barsi Lopes] Thank you so much, Kim. I hope everybody can hear me well and see the slides as we go. The accent you hear is because I'm from Brazil; in fact, I just got back from Brazil yesterday, so it may be a little stronger than usual.

    So let's start. We tried to give this webinar a provocative title: The Future is AI, Artificial Intelligence for Business. That said, if you're expecting cool demos and AI hype, I'm going to disappoint you, because I believe this is really an opportunity to take stock of where we are and to think about how we move from the hype to something that is actually valuable for organizations: to understand where we stand right now, what the challenges are, what the risks associated with AI are,
    and eventually to come up with some recommendations about how to navigate those risks, the kinds of things you need to pay attention to if you are involved in your organization's AI journey.

    So this is the agenda. We'll do a bit of a recap, then talk about where we are right now in both applied AI and generative AI, what the main risks are, why most projects fail, and how to succeed in that scenario. One thing you'll notice is that many of my slides carry lots of references, notes on technology trends, and additional material for you to explore later, so hopefully you'll find resources to dig into after we're done. My target is about 40 minutes of me talking, and then I'll answer any remaining questions before passing back to Kim for the wrap-up.

    Let's start with where all this AI stuff comes from. I have been working with AI for a long time; I built neural networks back in the 90s, when things were much more experimental. Nobody really knew what to do with them, because several pieces were missing. What we have seen in the last decade or so is that three factors have carried AI to the prominent position it holds today, where so many people are excited about it. The first is the presence of big data: we are collecting data at unprecedented levels, and anyone with a cell phone is constantly generating it. The second is much greater computing power.
    One of the biggest companies in the world right now is Nvidia; those of us who were gamers remember Nvidia for its graphics processing units, and now it is essentially an AI company. Computing power is advancing very, very quickly. The third factor is the algorithms, which are what actually mobilize our ability to do things with AI. Many of these algorithms were largely theoretical until we had the computing power and the big data to train them and use them in real life, but the algorithms themselves have been advancing quite a lot.

    Again, just as a recap, AI algorithms fall into two main categories. There is statistical machine learning: supervised learning, unsupervised learning such as cluster analysis, and traditional reinforcement learning (I'll say a bit about other types of reinforcement learning in a moment). Then there are the more traditional deep learning approaches: convolutional networks for image and video processing, RNNs, GANs, diffusion models, things that have been around for some time and have proved effective. They're not very sexy compared with what's arriving now, but I want to highlight that they work really well. The traditional, non-sexy stuff still works extremely well and is widely used in many organizations.

    Where we are right now, though, is the era of transformers. Transformers have revolutionized much of what has appeared, especially in the last four or five years; they were developed before that, but the practical implications are recent. At the very baseline, transformers are multidimensional matrices looking for correlations. Your basic large language model is essentially a model that anticipates what the next word is. We know it's more powerful than that at this point, but that's how it started. The main idea is that with more parameters and more training, the outcomes are better than with fewer parameters and less data. So the baseline of a large language model is this type of deep learning network, the transformer. They evolved to deal with text, but they have since been extended to a wide variety of problems, which makes our lives very exciting. That's the sexy stuff: all the new developments, announcements, and releases of new models you keep seeing.
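    To make that next-word idea concrete, here is a minimal, hypothetical Python sketch (not from the webinar): a toy bigram "language model" that simply picks the most frequent continuation seen in a tiny corpus. Real transformers learn these conditional probabilities with attention over enormous corpora, but the generate-one-token-at-a-time loop has the same shape.

    # Toy illustration of next-token prediction: a bigram "model" built
    # from word co-occurrence counts. This only shows the predict-the-
    # next-word loop itself; it is not how a transformer is implemented.
    from collections import Counter, defaultdict

    corpus = "the model predicts the next word and the next word follows the model".split()

    # Count which word tends to follow which (the "training" step).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev: str) -> str:
        # Pick the most likely continuation seen in training.
        return following[prev].most_common(1)[0][0]

    # Generate by repeatedly appending the predicted next word.
    text = ["the"]
    for _ in range(5):
        text.append(next_word(text[-1]))
    print(" ".join(text))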
    So that's where we are; it's a very exciting time, because there is a lot going on. That's where it came from; now let's talk about where we are today. What's the current state of these technologies, at a very high level? We are not going to demonstrate transformers and post-training approaches; we are going to look at the capabilities from a business perspective, and that is the main focus of this conversation: looking at everything through business lenses.

    Start with the latest and greatest in AI investment. McKinsey runs an annual survey of macro technology trends, and this year's edition was released a couple of weeks ago. It highlights two exciting areas. One is artificial intelligence as a whole: huge investments and a lot of job postings. You have seen it everywhere: AI, AI, AI, all the time. The new kid on the block, the exciting new thing, or, if we're feeling more skeptical, the shiny new object, is agentic AI, which we'll discuss in a moment. That is where investment is starting to come to life and where job postings are expanding hugely; people are genuinely interested in developing agentic AI processes and models. So that is where we are right now.

    Compare that with where we were just recently. The overall investment, around $124 billion, has remained roughly constant from 2024 to 2025, but the numbers have spread in different directions, partly because agentic AI really took off at the tail end of 2024; that is when agents started coming to life. Under that big $124 billion AI umbrella, the money is segmented: some of it is generative AI; some is machine learning in the more traditional industrial sense, Industry 4.0 and the like; and a lot of it is general applied AI, things that have been reliable and working for years, algorithms that have generated a lot of positive results.

    So the main takeaway is lots of investment and lots of interest, but some variety in what counts as exciting. In hype cycles the new thing gets the focus: applied AI is huge but is losing a little ground, even in job postings, while generative AI postings are growing. It is a cycle we have gone through many times in technology: the newest thing gets the coverage, while the things that have been working well no longer get the same level of press.

    Going beyond investment, a couple of other developments are worth setting as context; some of you are already aware of them, but I want to make sure we share a good understanding of the current state. One of the big issues right now is data. These models have pretty much consumed the vast amount of data available in the world; we are not generating extreme amounts of new data relative to what AI systems consume, so publicly available data is starting to tap out. We are therefore exploring more and more synthetic data, which is essentially systems generating data to train other systems. As you can imagine, opinions differ widely; you have to be extremely careful about how you use synthetic data. There are some really positive outcomes and a lot of risks if it is not done well. But that's where we are on data.
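    As a loose illustration of that synthetic-data idea, here is a toy Python sketch; the numbers and the simple Gaussian generator are invented for illustration, and real pipelines use far richer generative models plus careful validation of the artificial records.

    # Toy sketch of synthetic data: fit a trivial generator to a small
    # real sample, then draw artificial records to enlarge a training set.
    import random
    random.seed(0)

    real_transactions = [102.5, 98.0, 110.3, 95.7, 104.1]  # hypothetical data

    # "Fit" a generator: estimate center and spread from the real sample.
    mean = sum(real_transactions) / len(real_transactions)
    spread = max(real_transactions) - min(real_transactions)

    def synthesize(n: int) -> list[float]:
        # Draw new, artificial transactions around the fitted statistics.
        return [round(random.gauss(mean, spread / 4), 2) for _ in range(n)]

    print(synthesize(5))  # synthetic records a downstream model could train on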
    The other approach is using proprietary, internal data to augment the capabilities of the public models. Instead of using a public model on its own, you combine it with techniques like retrieval-augmented generation (RAG) that sit alongside the big models but are customized, much more narrow, and in many cases much more helpful for organizations. I recently attended a presentation by the AI dean (or vice dean) at the University of Pittsburgh's medical school, where they are starting to use synthetic data to help diagnose diseases and to train doctors in diagnosing them. There is a lot of potential there.
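    A minimal sketch of the RAG pattern described above, assuming a toy word-overlap retriever in place of the vector search a production system would use; the documents are invented, and the assembled prompt would be handed to whatever model API an organization has contracted (a hypothetical call_llm, not shown).

    # Minimal RAG sketch: retrieve relevant internal documents, then hand
    # them to a general-purpose model as grounding context.
    internal_docs = [
        "Refund requests over $500 require a director's approval.",
        "Customer data may not leave the EU processing region.",
        "Quarterly security reviews are owned by the platform team.",
    ]

    def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
        # Score each document by how many question words it shares.
        q = set(question.lower().split())
        return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

    def build_prompt(question: str) -> str:
        context = "\n".join(retrieve(question, internal_docs))
        return f"Answer using only this company context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("Who approves a refund over $500?"))
    # The assembled prompt then goes to the public model, grounding its
    # answer in proprietary data it was never trained on.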
    Another interesting thing is that it is really hard to assess how good these new large language models currently are and how far they can go. This is a timely point, because the press is indicating that GPT-5 might be released next week. In a sense it seems like quite some time since a new release, but ChatGPT has kept evolving, and the competitors, Gemini, Llama, and Claude, have all been evolving too, even without a definitive new version number. New models keep becoming available, and they tend to be very powerful. One way to see the progress is in the standard tests. I don't know if you remember, but two years ago, around November 2022 when ChatGPT became public, there was a lot of discussion about AI passing the bar exam, or writing an essay worth college credit. Now we sometimes have AI successfully passing qualifying exams for PhDs. A couple of weeks ago, Gemini earned a gold medal at the International Mathematical Olympiad, which required a lot of creativity and inventing new solutions to problems, and those are very, very tough questions; only a small percentage of people can do them, certainly not me. Even understanding how far these models have come has become complicated, because it is hard to test for intelligence and hard to test for capabilities when they keep evolving.

    Now, one of the biggest things right now is the idea of AI agents. AI agents are relatively new, and the big transformation is that we have started training those large language models with a different approach. Instead of relying mostly on imitation learning, we increasingly use post-training and reinforcement learning, and that has changed the capabilities of these models quite a bit. In earlier attempts, and by early history I mean a year and a half ago or so, you saw models making mistakes and compounding those mistakes over time. Think of driving in a race: you make a small error, and instead of correcting, you drift toward the side of the road; the mistakes compound, and suddenly the car goes off a cliff. That has changed because we now train models differently, with a lot of post-training and reinforcement learning, and we actually use models to validate models, checking whether what a model generates is accurate enough to pass muster. The consequence is that we now have solutions that can take more complex instructions. In the past, beyond a certain point, the more complex your prompts and requests became, the more likely the model was to wander out of the acceptable range. Now we are seeing models that can keep track of more elaborate instructions and deliver better solutions and better outcomes. Because of that, a real opening has appeared, and many software companies are investing in it, with applications from the likes of ServiceNow and Salesforce going all in on this agentic approach, which has a lot of potential to execute.
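    The "models validating models" idea mentioned above can be sketched as a generate-then-judge loop. Both functions here are hypothetical stand-ins for calls to (ideally different) models; the judge gates which drafts are accepted.

    # Sketch of model-validates-model: a generator proposes an answer and
    # a separate checker scores it before it is accepted, retrying when
    # the score is too low. Placeholder logic stands in for model calls.
    import random

    def generate(task: str) -> str:
        return f"draft answer #{random.randint(1, 100)} for: {task}"

    def judge(answer: str) -> float:
        # A real judge model would return a rubric-based quality score.
        return random.random()

    def answer_with_validation(task: str, threshold: float = 0.7, max_tries: int = 5) -> str:
        for _ in range(max_tries):
            candidate = generate(task)
            if judge(candidate) >= threshold:
                return candidate  # the draft passed muster with the checker
        return candidate  # fall back to the last draft after max_tries

    print(answer_with_validation("summarize the Q3 risk report"))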
    The final thing I want to mention about where we are right now is that we are seeing a variety of different development efforts. We have seen DeepSeek; it feels like DeepSeek has been around for a long time, but it actually appeared only a couple of months ago. Qwen and other models are likewise exploring different approaches, different ways of training that may be more cost-effective while generating very positive results. So a lot is happening in the realm of algorithms and new techniques; it is a very vibrant area right now, with real potential to lead to better outcomes. And there is more: I was in Korea a couple of months ago talking to some AI companies, and they are asking how you actually, truly implement reasoning, versus the imitation of reasoning we have right now, by building induction, deduction, and abduction mechanisms into these models. A lot is happening, and there are many exciting new developments in general.

    I know the questions are coming; please keep them coming, and I'll try to address as many as possible. Yes, Caitlin, thank you so much; I appreciate that. I will get to some of the questions at the end. So let's talk about the risks. And actually, Kathleen, I might get to one of the questions right now.

    Let's talk a little about ethics and regulations. The main question is what is ethical and fair in the world of AI. One question just popped up in the Q&A: how do you ethically source data? If you look at the progression of data sourcing, there are many statutes and many agreements between the big tech companies, data providers, news sources, and so on to govern how data is used. But there is still a large gray area where data is collected without the permission of its creators, and there is a huge discussion and a lot of legal activity around that right now.

    Even beyond that, what is ethical? Ethics is, of course, a very complex problem. If you asked the hundred of us here about our approaches to ethics, we might get 250 different answers, because are we talking about fairness in the processes or fairness in the outcomes? So expecting systems to reflect ethics is complex by its very nature. What you start to see now is a bit more regulation. Google just reached an agreement with the European Union to try to navigate its AI regulations. But even then, how enforceable is it? How complex might it be to actually execute those regulations? How regulation will evolve is still very much undecided, and if we are building systems to serve customers, that becomes a problem, because what's fair, what's regulated, and what's legal may not line up.

    Then you have biases. We are better at this than we used to be, but biases are still very present: in how our data is sourced, in whether we are eliminating the biases that sit in the data sets, in the selection of models, and in the outcomes. There is still a lot to be discovered and discussed there. We deal with biases much better than before and are much more attentive to what the potential biases are, but we have to stay vigilant all the time.

    The other issue is the lack of explainability at times, combined with the fact that our human minds are not built to monitor AI constantly. When you see something working reliably, you say to yourself: it has worked a hundred times before, it will certainly work in the future, and you start paying less attention. That is a big risk, because as individuals we are not well prepared to keep monitoring these outcomes. Then you run into liability: who is liable for failure and for oversight, and who is responsible for auditing the outcomes? There is an interesting article in Ars Technica about how the legal system may be completely unprepared to deal with fake citations and fabricated references in case law, because who is constantly monitoring for case law that was made up because a model got too creative? Or take the "MechaHitler" episode with Grok some time ago; that was blatant, and everybody could see it was obviously wrong, no discussion needed. But what happens when the bias is more subtle, present but not so blatant? How do we monitor that? How do we make sure systems are as unbiased and as impartial as possible in those scenarios?

    That leads to another problem: how do we even know how these things work? One thing to always remember is that explainability is still an issue. We are getting better at understanding the internal workings of these models, but there is still a lot of complexity. This is not regular code, where you can track down exactly why a mistake took place. We can see that a change led to a really bad outcome and perhaps revert it, but it is very hard to pinpoint what exactly went wrong. Anthropic has done some really good work here, and even they still lack the precise understanding of a model's chain of reasoning that would let us see exactly how things go wrong. So it is very hard to debug, hard to figure out how to fix mistakes, and, because mistakes are hard to anticipate, they are also hard to catch.

    So why do most projects fail? Those are the risks in general; now let's look at the risks for organizations. First of all, remember that there is a lot of hype. McKinsey publishes a recurring report on the state of AI in the world, and, with all due respect to my many friends at McKinsey, a great company, you can see the sequence: 2023 is the breakout year, 2024 starts to generate value, and 2025, well, where is everybody? There is a lot of hype, and one thing I want to do in this conversation is minimize it and be very grounded about what is achievable, rather than just chasing the latest trend.

    Infosys published an interesting study this year. They surveyed hundreds of companies and interviewed a range of them to identify the five factors associated with success, with readiness for AI. They came up with strategy, governance, talent, data, and technology, and only a tiny portion of companies are prepared on all five dimensions. The striking fact is that most AI projects fail, and that is largely explained by how few of those factors most organizations have truly mastered.
579 00:28:17.975 --> 00:28:20.635 And, uh, the consequences that, um, you know, 580 00:28:20.635 --> 00:28:23.435 some estimates say that 80% to 90% 581 00:28:23.435 --> 00:28:26.955 of AI projects never leave the proof of concept, right? 582 00:28:27.785 --> 00:28:31.715 Gartner, um, uh, just did a study a couple of weeks ago, uh, 583 00:28:31.715 --> 00:28:34.555 saying that about 40% of, uh, 584 00:28:34.645 --> 00:28:37.875 agentic AI projects are going nowhere next year, right? 585 00:28:37.875 --> 00:28:42.195 They're going to be canceled. Um, only about 19% of projects 586 00:28:42.415 --> 00:28:45.875 beyond the proof of concept actually achieve expected value. 587 00:28:46.695 --> 00:28:50.995 So if you are in that AI, you know, let's go AI, right, 588 00:28:51.465 --> 00:28:53.955 it's very sobering when you see these numbers 589 00:28:54.145 --> 00:28:57.235 because the reality is that we are far 590 00:28:57.765 --> 00:29:02.155 right from have established practices that to make AI 591 00:29:03.175 --> 00:29:04.315 viable, right, 592 00:29:04.415 --> 00:29:06.875 and valuable for many organizations. 593 00:29:07.495 --> 00:29:08.955 So let's go over a little bit 594 00:29:08.955 --> 00:29:10.315 of them very quickly, you know, right? 595 00:29:10.555 --> 00:29:12.155 I mean, one of the things we have to realize 596 00:29:12.305 --> 00:29:14.795 that most AI projects already start 597 00:29:14.795 --> 00:29:16.195 with a losing proposition 598 00:29:16.505 --> 00:29:19.235 because the organization does not understand 599 00:29:19.305 --> 00:29:20.795 what the technology can do. 600 00:29:21.125 --> 00:29:22.675 There is no AI strategy 601 00:29:22.815 --> 00:29:24.835 and there are no business cases and metrics. 602 00:29:25.735 --> 00:29:29.755 So without that, any IT project, right, require those, 603 00:29:30.345 --> 00:29:33.395 without having that, why exactly are we doing this right? 604 00:29:33.585 --> 00:29:36.475 What is a strategy around that? What, what should we do? 605 00:29:37.145 --> 00:29:39.115 That already puts you in a disadvantage 606 00:29:39.345 --> 00:29:42.805 because what you are trying to achieve, the unachievable, 607 00:29:42.805 --> 00:29:44.405 because there are no targets to fall. 608 00:29:45.305 --> 00:29:47.765 Second part is governance. 609 00:29:48.345 --> 00:29:49.965 People don't understand the risks. 610 00:29:50.395 --> 00:29:52.965 What are the risks associated with AI? 611 00:29:52.965 --> 00:29:55.685 There is no understanding about that. 612 00:29:55.985 --> 00:29:59.665 Um, um, you know, that risks for personal risk, 613 00:29:59.985 --> 00:30:01.745 physical safety, financial performance, 614 00:30:01.895 --> 00:30:04.985 even national security can be affected, right? 615 00:30:05.645 --> 00:30:07.465 So what are we, are the risks, 616 00:30:07.685 --> 00:30:10.465 and if we don't have the proper governance, you know, 617 00:30:10.705 --> 00:30:13.305 structure, those risks are going to propagate 618 00:30:13.305 --> 00:30:15.465 and eventually get you later when you're not really 619 00:30:15.495 --> 00:30:16.705 expecting them anymore. 620 00:30:18.005 --> 00:30:21.225 Now we start to see more talent emerge. 621 00:30:21.645 --> 00:30:23.825 Um, so we start, 622 00:30:23.885 --> 00:30:27.065 but still we have this discussion about way we have 623 00:30:27.065 --> 00:30:29.305 to teach people more AI skills, right? 624 00:30:29.365 --> 00:30:31.625 And like, what, what, what are AI skills, right? 
625 00:30:31.725 --> 00:30:32.825 Is it prompt engineering? 626 00:30:32.885 --> 00:30:35.905 Is it, you know, is it understanding the models, right? 627 00:30:36.805 --> 00:30:39.505 And, and what I have seen, I was talking with, uh, 628 00:30:39.505 --> 00:30:41.545 with an executive, um, in, um, 629 00:30:41.885 --> 00:30:45.105 in a large chemistry organization in the Europe, uh, 630 00:30:45.215 --> 00:30:47.025 last week, and they say, well, 631 00:30:47.035 --> 00:30:48.905 we're actually not really hiring many 632 00:30:48.905 --> 00:30:50.065 data scientists anymore. 633 00:30:50.485 --> 00:30:51.625 We need to hire more people who 634 00:30:51.625 --> 00:30:52.865 understand governance, right? 635 00:30:53.165 --> 00:30:56.465 So this patterns of hiring, the patterns of need 636 00:30:56.485 --> 00:30:59.145 for organizations of AI skills, 637 00:30:59.445 --> 00:31:02.585 AI talent, are really changing very dramatically. 638 00:31:02.685 --> 00:31:05.785 And of course, it changes with industries, uh, 639 00:31:05.965 --> 00:31:08.785 but it's also changing in terms of age, in terms 640 00:31:08.785 --> 00:31:09.785 of experimentation. 641 00:31:10.405 --> 00:31:14.145 So, um, um, um, the, um, Associate Press 642 00:31:14.725 --> 00:31:16.065 did a research, uh, 643 00:31:16.365 --> 00:31:20.625 the survey essentially recognized about only 37% of people, 644 00:31:21.365 --> 00:31:24.825 um, have actually used AI for work, right? 645 00:31:25.205 --> 00:31:27.985 We, we see all the hype and see, oh, everybody's using AI. 646 00:31:28.035 --> 00:31:30.065 Maybe they're all using in, in college, 647 00:31:30.165 --> 00:31:31.625 but not work as much. 648 00:31:32.305 --> 00:31:34.585 I was working with a company last week 649 00:31:34.645 --> 00:31:37.465 and they said about the same statistics, about only 40% 650 00:31:37.465 --> 00:31:40.225 of the employees are actually using AI in work. 651 00:31:40.805 --> 00:31:44.185 So a lot of companies who struggle in terms of what is 652 00:31:44.375 --> 00:31:45.825 that, what are those skills? 653 00:31:45.965 --> 00:31:47.625 How do we train people, right? 654 00:31:47.625 --> 00:31:50.825 Should they review skills are necessary to use AI? 655 00:31:51.725 --> 00:31:53.825 And from the employee's perspective, many 656 00:31:53.825 --> 00:31:56.345 of them are waiting for guidance, right? 657 00:31:56.495 --> 00:31:57.825 There's a, a couple of, uh, 658 00:31:58.005 --> 00:32:00.945 new, our new surveys essentially saying, hey, I'm kind 659 00:32:00.945 --> 00:32:03.345 of willing to do it, but I dunno what my limits are. 660 00:32:03.585 --> 00:32:04.865 I dunno what the rewards are. 661 00:32:05.345 --> 00:32:07.425 I dunno what you know, I should be doing, right? 662 00:32:07.605 --> 00:32:10.905 And so leadership in organizations need to step up 663 00:32:11.365 --> 00:32:13.505 and kind of set up what their guard rails are 664 00:32:13.565 --> 00:32:16.945 and what objectives, what the strategy around AI 665 00:32:17.165 --> 00:32:18.345 to be, uh, successful. 666 00:32:18.345 --> 00:32:23.065 Data is the, one 667 00:32:23.065 --> 00:32:24.585 of the biggest things about the, 668 00:32:24.925 --> 00:32:27.785 the challenges in any AI project, right? 
[unintelligible] — the person who created the Netflix recommendation algorithm — I was talking with him last year, actually. [unintelligible] is a consultant now; he goes into companies, and what he sees is: we can define the problem, we can define which model we should use, but then when you try to actually use the data to solve that problem with that model — we don't have the data.

And that's something organizations have really sidestepped so far. It's going to come back to bite them, because if you don't have the right data, the right data governance, the right data structure, then all of a sudden your models don't work and all that investment gets wasted. So in some cases you have to figure out what your data status is before even trying to go into AI, because you will not get the results you're expecting unless you truly understand the value of data in your organization.
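To make that "know your data status first" advice concrete, here is a minimal sketch of the kind of data-readiness check a team might run before committing to an AI project. The toy table, the column names, and the 5% missing-data threshold are purely illustrative assumptions.

    # A minimal data-readiness sketch: profile the data an AI project would
    # depend on before investing in models. The toy table, column names, and
    # quality thresholds are all illustrative assumptions.
    import pandas as pd

    def readiness_issues(df: pd.DataFrame, key: str, max_missing: float = 0.05) -> list[str]:
        """Return blocking data-quality issues; an empty list means proceed."""
        issues = []
        if df[key].duplicated().any():
            issues.append(f"duplicate values in key column '{key}'")
        for col, share in df.isna().mean().items():
            if share > max_missing:
                issues.append(f"column '{col}' is {share:.0%} missing")
        return issues

    if __name__ == "__main__":
        # Toy stand-in for, say, a customer table a model would be grounded on.
        customers = pd.DataFrame({
            "customer_id": [1, 2, 2, 4],
            "segment": ["smb", None, "enterprise", "smb"],
            "last_purchase": ["2025-01-03", "2024-11-20", None, "2025-02-14"],
        })
        problems = readiness_issues(customers, key="customer_id")
        print("ready" if not problems else f"fix first: {problems}")

The point is not these specific checks but the gate itself: the project proceeds only when the data it depends on is demonstrably usable.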
Now, the interesting thing is that what's emerging a lot right now is more resources about the technology itself — the models. I was giggling when I saw this common reference architecture for GenAI at scale, a publication by McKinsey. It seems like an extremely complex architecture to me, and believe me, I have been in technology for decades. So there's nothing necessarily simple here. I like the attempt to create some overall model. But what we have seen is that on the more applied AI side — maybe the non-sexy AI — we already have lots and lots of good models and good solutions in the market, and we're starting to see more of the emerging GenAI models, and more market availability of those models, which keep getting better and better.

The big issue for organizations now is that they have to orchestrate a lot of those things, and there's a lot of variation in options and in contracts. Tokens: how many tokens are you going to need? Well, it's hard to anticipate that. So what many organizations are struggling with right now is actually establishing: what's my relationship with vendors and providers? How much do I pay for that? What is it worth in terms of access to these tools? You can pay $250 for an individual license to some of the more refined models, or pay nothing. What's the range that you want? In which situations do you want to apply the super-sophisticated model, because that's the only one that's going to give you the accuracy you need, versus the cheap ones? That's the kind of thing organizations are struggling with.

I was talking again with an executive a couple of days ago, and he said: contracts have become a big issue, because we don't quite know how to assess this investment. What's the ROI of establishing relationships with vendors, and how do you manage that ROI over time? Again, none of this is unheard of; it's just that we are not quite there yet in having really well-established practices we can rely on, the way we do with more established technologies. There is a marketplace for a lot of these solutions, though, and those marketplaces are taking shape in terms of what the best arrangement is for a variety of different organizations.
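To put rough numbers on the token question, here is a back-of-the-envelope sketch. The per-token prices, request volumes, and risk-based routing rule are all assumptions made up for illustration; real vendor pricing varies by provider and changes often.

    # Back-of-the-envelope token budgeting and model routing. All prices and
    # volumes below are illustrative assumptions, not real vendor quotes.

    PRICE_PER_1K_TOKENS = {          # assumed USD per 1,000 tokens (in + out)
        "frontier_model": 0.060,     # the "super sophisticated" tier
        "budget_model": 0.002,       # the "cheap" tier
    }

    def monthly_cost(requests_per_day: int, avg_tokens_per_request: int, model: str) -> float:
        tokens = requests_per_day * avg_tokens_per_request * 30
        return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

    def route(task_risk: str) -> str:
        """Toy routing rule: pay for accuracy only where mistakes are expensive."""
        return "frontier_model" if task_risk == "high" else "budget_model"

    if __name__ == "__main__":
        # e.g. 5,000 drafting requests per day at ~1,200 tokens each
        for model in PRICE_PER_1K_TOKENS:
            print(model, f"${monthly_cost(5000, 1200, model):,.0f}/month")
        print("contract-review task ->", route("high"))
        print("email-summarization ->", route("low"))

Even with invented prices, the spread between tiers shows why deciding which tasks genuinely need the sophisticated model — rather than defaulting everything to it — is where the contract money goes.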
Now, let's go to the final part as we wrap up and leave some space for questions: how to succeed. And here, again giving credit to the work McKinsey puts out — I made fun of some of it earlier with the hype cycle — I like the title of this one, because it's finally the realization that it is not only about the technology. Organizations have to change themselves if they want to get real value from AI. In many cases the attitude is: let's just drop in the technology, see what happens, and pray for the best. That's not possible. It does not work at all in this scenario, because you're going to face disappointment after disappointment after disappointment.

So one of the things we see is that you need to change the organization quite a lot more to actually get the value, and there are a couple of changes that may be most pertinent. One is really a change in strategy. I was giving a talk to some executives in Indy, and we were talking about the fear of missing out: everybody seems to be investing in AI because everybody else is doing it, so I need to do it as well. Well, FOMO is not a strategy. It really isn't. Again, that doesn't mean you shouldn't experiment with AI — you can figure out what the best approach is — but you should still formulate a strategy about where you are going. Otherwise a lot of that investment is going to be wasted, and people are going to become a little more reticent about using AI in the future.

The other thing you have to think about is how to create a dual timeline. One timeline says: these are my quick wins that I want to get right now. But the other asks: what's my imaginative, creative approach to AI that says, if you are successful in this, this is how our organization is going to change?
And if you are successful in that, this is how the organization goes to iteration three, and this is how it goes to iteration four. It's really looking at the realm of the possible — the art of the possible — because just looking at the short-term wins, the quick wins, is not going to move the needle very much.

I have classes where I tell my students: I want you to pick one industry. Think about this industry 10 years from now. Try to come up with what you see happening in that particular industry in 10 years. Use what you know right now, and expect the iterations the technology is naturally going to go through. Then: how do you prepare the organization, not for right now, but for 10 years from now, when the world may have changed? How do you not get left behind? That's what organizations need in terms of strategy. And of course, you also need to develop business cases and metrics that can guide people. It cannot be investment just for the sake of investment; it has to come back with real value.

We also need changes in skills and in how we organize. What are the true AI skills that employees need? What can AI do? What can't AI do? How do you build up those skills? What are the processes that need to change? You can train people to use more AI — you can be talented at using AI — but if the processes in the organization don't change, if you don't think about the ways you can compress time, or improve accuracy, or benefit the customer in a way that is coherent, you're always going to end up with these isolated mechanisms that cannot scale very well. So think about how exactly those processes need to change to absorb AI into the organization — many organizations fail to do that. It's like the future of work, but really the future of the nature of work.
What's the nature of work when you start embedding AI? A lot of it is agile — agility. How do you keep iterating through these new approaches: not only the AI solutions themselves, but how the processes can change over time in a way that benefits all the stakeholders?

Data management — again, very briefly, I'm not going to repeat myself. How do you deal with data? You need to really focus on data from the very beginning. Many organizations struggle with AI because they don't have good data management structures and they don't have good data governance. They try things, it doesn't work, and then they regret it and say: well, AI is good for nothing; why am I doing this? But essentially you started from an unattainable position, because you didn't have the data to actually run this. So allowing yourself to create a good data infrastructure is absolutely fundamental.

And finally — again, I want to keep some time for Q&A — it's thinking about governance and risk: risk management. Bring risk in very early in the process, so you can have solutions that are practical and that minimize the amount of risk involved. In the technical world we have things like DevOps, and now we're moving much more toward DevSecOps — development, security, and operations. It's thinking about this for AI solutions as well. What's my governance? What are my risks?
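As one sketch of what "bringing risk in early" could look like for AI, the snippet below imagines a DevSecOps-style pre-release gate: an AI change ships only after it clears governance checks. The fields, checks, and thresholds are hypothetical assumptions, not an established standard.

    # Hypothetical pre-release gate for an AI feature, in the DevSecOps spirit
    # described above: risk checks run before deployment, not after an incident.
    # Check names and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ReleaseCandidate:
        eval_accuracy: float        # score on an offline evaluation set
        pii_leak_findings: int      # hits from a privacy scan of sampled outputs
        has_human_fallback: bool    # can a person override or review outputs?
        owner: str                  # accountable team, required by governance

    def governance_gate(rc: ReleaseCandidate) -> list[str]:
        """Return the list of blocking issues; an empty list means clear to ship."""
        issues = []
        if rc.eval_accuracy < 0.90:
            issues.append("accuracy below agreed threshold")
        if rc.pii_leak_findings > 0:
            issues.append("privacy scan found potential PII leakage")
        if not rc.has_human_fallback:
            issues.append("no human-in-the-loop fallback defined")
        if not rc.owner:
            issues.append("no accountable owner recorded")
        return issues

    if __name__ == "__main__":
        rc = ReleaseCandidate(eval_accuracy=0.93, pii_leak_findings=1,
                              has_human_fallback=True, owner="claims-ops")
        blocking = governance_gate(rc)
        print("ship" if not blocking else f"blocked: {blocking}")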
One good thing is that responsible AI — as a set of guidelines and rules — has become much more widely disseminated than it was in the past. I teach AI in business; I have been teaching it for six, seven years now. And I remember, a couple of years ago, I'd say, "Tell me about the responsible AI initiatives in your organizations," and get blank stares back: what are we talking about? Now my students really do say: oh, yeah, we have this approach to responsible AI, we have this framework, we adapted it from this company, and so on and so forth. That's pretty much where we're heading. So again, risk management and governance are what's going to minimize the risks. And there is real value in that, because customers knowing that you care about them, and stakeholders knowing that you care about your reputation and about the implications, is well perceived in organizations — sorry, in the marketplace as well.

Okay, so let's go to some questions. Let's see what we have here. I'm going to try to keep this to about 10 minutes or so, and feel free to reach out to me on LinkedIn; we can continue the conversation there if we don't get the chance here.

In terms of the university: the university itself does have some infrastructure right now to provide some resources. IU Innovates is one of them. If you are related to IU, that's where I would go — that's the office at Indiana University. But essentially, look at the ecosystem where you are. If you're affiliated with a university, you may want to approach your local university and figure out the potential for doing that. Thank you, Kathleen, appreciate that very much.

We talked a little bit about large language models with ethically sourced data. I would check — it's still very controversial, unfortunately.
There are a lot of companies that, [unintelligible], are using data without permission. Don't even get me started on artists and the creative arts and things like that. So I wish I had a better answer, but it probably involves a little more research to try to figure that out.

Let's see... contemplating the effect of AI on education: what kind of labor supply will be required in the field of financial services and banking? What kind of new graduate degree programs would be necessary to produce effective workers in the field of financial management in corporations? Awesome, awesome question. The previous questions were great as well, but this is actually something we have a guest speaker from a financial institution talking about at Kelley next week.

What we have seen — a lot of it, from a generic view, is the way I try to train my students. I know that Jamie and others are here — hi, Jamie, nice to see you. It's figuring out the art of the possible. What exactly can you think about in terms of how these technologies can be applied to different scenarios? So I push my students a lot: all right, this is what we know about intelligent interfaces, for example, just to pick a topic. This is what we know about affective AI: how to read emotions, how to read scenarios, how to read voice anxiety, how to understand the potential implications of your eyes moving a particular way. That's what we know — now let's think about potential applications of it. That's why I push my students to get a baseline of the technology. You might not know everything about the technology.
But understand what the outputs and outcomes of the technology are, and think about how they can change processes. And of course, think about the risks as well: privacy risks and so on, and the accuracy and the potential biases. Understand the whole package, because what you want is people who are innovating the processes and understanding what the processes can become once you apply these new technologies — so that you end up with a better way, one where, ideally, AI and humans are helping each other. Humans are always going to be better than AI, at least for a long time, at certain tasks. And AI is going to be very good at certain tasks. How can we design — redesign — the process around that? So for anyone thinking creatively in financial services: it's about processes, it's about security, it's about governance, it's about auditing, it's about serving customers. How can you create ways in which AI is not [unintelligible] — not just "hey, we have AI" — but actually, effectively changes the process itself?

Ah, cool, cool, cool. So let me go over here. Yes — there's a great point that smart employees know they have to fact-check the AI. Oh, I love this question, because it asks why you don't just do the work yourself instead of checking the AI. There are specific situations where we use AI where, even with all the mistakes, it earns its keep. Idea generation is something AI can do quite well: hey, give me some potential titles for a talk. And anything that can speed up the process might be helpful — for example, putting together a presentation. I was teaching my class last week, and I asked people to put together a little presentation for me in 20 minutes.
And every single group discussed the ideas first, and then said: hey, let's just have the AI put together the actual presentation, and then we'll validate it afterward. So tasks like that — where you can evaluate the outcome yourself, but it saves you some time — those, I think, are a really, really good fit for AI.

The way I always describe AI to myself is: think about having a very smart intern. You ask the intern to do something, and the intern is going to joyfully do it and present really good results — but with two little caveats. One is that you cannot really trust the intern to run the show; you have to validate the work. And two, your intern is also a big fan of Fortnite, might have spent the whole night playing Fortnite, and makes things up from time to time in the results. So as long as you understand that it's a trust-but-verify situation, you're essentially looking for parts of the process that can shrink — things like creating a piece of writing, or modifying something you wrote to make it a little more elegant. Things that save time, as long as you're able to verify the outcomes — those are great. And I know this because I have my students creating essays for me, and from time to time I'm like: yeah, that was totally generated by AI, because you never read it. And that is wrong. But saving time, I think, is the approach here.
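The smart-intern analogy maps onto a simple trust-but-verify pattern in code: let the model draft, but force a review step before anything is used. Below is a minimal sketch; draft_with_ai is a hypothetical stub standing in for whatever model call you actually use.

    # Minimal trust-but-verify sketch of the "smart intern" pattern: the model
    # drafts, and a human approves before the draft is used.
    # draft_with_ai is a hypothetical stub, not a real API.

    def draft_with_ai(prompt: str) -> str:
        return f"[AI draft for: {prompt}]"   # stand-in for a real model response

    def human_approves(draft: str) -> bool:
        # In a real workflow this is a person reviewing the draft; here we
        # simulate the review step with a console prompt.
        answer = input(f"Approve this draft? (y/n)\n{draft}\n> ")
        return answer.strip().lower() == "y"

    def produce(prompt: str) -> str:
        draft = draft_with_ai(prompt)        # the intern does the first pass fast
        if human_approves(draft):            # but never runs the show unreviewed
            return draft
        return "rejected: redo it yourself or re-prompt with corrections"

    if __name__ == "__main__":
        print(produce("three candidate titles for a talk on AI strategy"))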
How are companies getting data from systems that were not... — so, Thomas, that's a great question. The data component, the actual data: there are technical solutions for that, but it has also always been a problem. Sometimes you have data that was never intended for a given purpose, and you're running analysis on it. Data collected for marketing, say, that's now being used for recalls: the reliability might not be quite the same. So understanding data sourcing, understanding data [unintelligible], understanding the mechanisms that let you assess the quality of the data — those are very important, and they're part of data governance. We have struggled with that for a while, but there actually are technical solutions and management solutions for dealing with data that has migrated to different purposes, because we've had to face that for quite some time.

Oh — FOMO, sorry, Paul, is the fear of missing out: everybody jumps into AI just because everybody else is doing it, and because everybody else is doing it I have to do it too, without much more behind it.

Next: a process for categorizing AI skills for a nontechnical skilled workforce group — in this case, business operations staff who have always done things a certain way. That's a tough one, Jamie. That might be something we discuss more, perhaps on LinkedIn. I think we're not quite there yet. Let me see how I answer that. I think a start would be to really set up some skill sets that are valuable. And this is part of the struggle in universities as well: we need to train students who are going to be in a workforce where they're going to be using AI all the time. How can I make sure students are getting skills that transfer to a variety of different processes?
I think a lot of it is understanding capabilities, understanding what is possible, understanding responsible use of AI — and understanding that, as was said before, sometimes there are risks and the outcome is not great. So understand that there should always be a human in the loop, at least at this stage of AI's development, to make decisions about the validity of the initial outcome and the assessment. I was talking to a faculty member here earlier, and the point was: hey, if you are an expert already and you're using AI to minimize your time, that's pretty solid; that's something we can actually do relatively well. But if you are someone completely new, with no basis to evaluate the accuracy of the outcomes, that's a little bit touchy. So I think that's something that has to evolve, in terms of how we handle it.

Who is best suited to lead this effort? That's a great question. [Kim] That's a great question. [Alex] Yeah. So when I teach my students — many of my students are MBA students — they always say: hey, we're the ones. And I think it's really a collaboration between the technical people and the business people. They have to be aligned. They absolutely have to be aligned. This is a big challenge in many technical efforts: how do you align people so that they understand both the business consequences and the technology possibilities? So create structures that are really looking for value streams, instead of just a translation from tech to business. That is always my recommendation for making sure we're able to get the value.
All right, so I'm going to pass it on, because I just want to make sure I give time to Kim. By the way, if anyone wants to reach out to me on LinkedIn or via email, et cetera, I'd love to discuss — those were really great questions; thank you so much for sharing them. But I'm going to pass to you, Kim — oh, I forgot to actually move the slide; we're in the Q&A, sorry about that. So Kim, I'll let you take over. And again, please reach out to me on LinkedIn or by email and we can continue the conversation.

[Kim] Thanks, Alex. So I think the next slide has a little bit of information. Okay — today's session: if you enjoyed Alex, which I'm sure you all did, and you'd like to learn more, we have a really wonderful professional development course offering starting on September 2nd. It runs September 2nd through October 7th, and the course title is Rise of AI. Alex, I'll let you jump in and provide a little more detail around it, but it's a hundred percent online, we have live online sessions, and the duration is only about six weeks — so a really great course if you're just trying to learn a little bit more and get a certificate or whatever you're looking for. Alex, do you want to talk a little bit about the content? It's you and two other professors — [Alex] Yes. [Kim] — a great cohort of faculty presenting on this topic.

[Alex] Yeah, sure thing. Thank you, Kim. So there are three of us. I'm going to handle AI for organizations, extending much of what we discussed here, but with more details and more use cases. The first part is actually done by Sagar Samtani. Sagar is among the very top AI researchers we see around in the US.
Sagar also took a leave of absence for one year to become the director of AI for TSMC — the big Taiwanese chip manufacturing company. So he has been out in industry; a lot of great content there. And the third faculty member, who is going to be talking about AI for individuals, is Alan Dennis. Alan is a researcher who has been working on AI from an individual-use perspective for eight years or so. He has some really great work on things like digital humans, and he's actually going to be doing some cool demos in that area as well. He's also going to be talking about things like prompt engineering — things that are very practical. I mean, I think everything is going to try to be super practical; that's the definition of this course. And I hope to see many of you there. Kim, back to you.

[Kim] Great, great. And we have a QR code here, on the previous slide, so feel free — we're recording the session and we'll send out the recording. So if you want to scan the QR code for more information, you'll see all of the course details there: pricing, dates, registration information. You'll be all set there.

[Alex] So, very quickly here before going back to Kim: we also have new online MS programs. There's a lot of content about AI in both — I have a whole course on that. [Kim] Mm-hmm. [Alex] We also have Sagar teaching in the programs, on generative AI and deep learning. So both the MS in IT Management and the MS in Business Analytics — the QR codes are there. These are, of course, a deeper dive.
But there's a lot of AI content in those two programs, from AI for business and machine learning, with a lot of practical experimentation, to deep learning and GenAI. Actually, I was talking to Sagar last week, and he plans to have projects where people can create solutions in the class — coming up with uses of large language models. So really, really good stuff. And again, the QR codes are available. If you are interested, just let us know, and we're happy to direct you to the people who can give you even more information. I can give you information too, but if you want more about the process, et cetera — Kim, back to you again.

[Kim] Wonderful, thank you. Okay, so thanks, Alex, and thank you all for joining us today. Just before we close, I want to provide a little bit of information about Kelley Executive Education, in case you aren't familiar with our unit. Our programs are offered in a variety of different formats: online; in person in Indianapolis and also in Bloomington; and hybrid. Within our professional development portfolio, we have courses on topics such as leadership and management; AI and tech, which is rapidly growing, so keep an eye out for new programs in that area; operations; finance; and a number of other offerings. If you're looking for more on AI — I know some of you asked whether we provide certifications on topics around AI — we have Rise of AI starting September 2nd with Alex, as we mentioned, and AI Applications in Marketing launching August 5th. And then we also have several other courses launching in the fall that we'll be announcing soon, so keep an eye out for those. If you have any questions at all, feel free to email me anytime. On the next slide, you'll see there's a professional development email that we have.
Our advisors will follow up with you immediately, or you can also shoot me an email. I'm always happy to set up a call and talk you through any questions you have. If you are an IU alum joining us today — which we hope you are — you can receive 15% off any of our professional development course offerings. You can use the code "SAVE15" for any of the courses we offer, so keep that in mind; it's a nice little perk for alumni and staff. And the QR code here will go directly to our websites, where you can learn more about all of our program offerings. But like I said, if you have any questions at all, feel free to shoot me an email anytime — Alex as well; we're always happy to talk to you. And thank you all for joining us today.

[Alex] Thank you so much, everybody. Please keep in touch, and let's keep the conversation going.

    Meet your instructors

    Alex Lopes portrait

    Alex Barsi Lopes

    Alex Barsi Lopes, PhD, is a clinical professor of information systems and Grant Thornton Scholar at the Kelley School of Business at Indiana University Bloomington. He currently serves as associate chair for Kelley Executive Education Programs, in charge of online MS programs and certificates. He has taught courses on digital transformation, management of the IT function, business process modeling and systems development, low-code development, agile development and organizational agility, design thinking, robotic process automation, intelligent automation and artificial intelligence applications, and databases/big data technologies.

    Alex created the Technology Consulting Workshop, served as its director from 2016 to 2020, and led student consulting engagements in Guatemala, India, and Thailand. His research focuses on online information goods, collaboration technologies, face-to-face and online social networks, and IS educational initiatives. His research appears in journals including Information Systems Research, Journal of Management Information Systems, and Communications of the ACM. Passionate about international business education, Alex has taken students to Brazil, Canada, China, Guatemala, India, Korea, Mexico, and Thailand, in addition to developing a speaker series about Ghana and Western Africa. Before earning his PhD from the University of Pittsburgh, Alex held consulting and corporate positions. Prior to joining Kelley, he was the director of the MS IS Program at the University of Cincinnati.

    Sagar Samtani portrait

    Sagar Samtani

    Sagar Samtani, PhD, is an assistant professor and Grant Thornton Scholar of Information Systems at the Kelley School of Business at Indiana University. Sagar’s research aims to develop AI-enabled analytics based in deep learning, network science, and text mining approaches for dark web analytics; vulnerability assessment for advanced cyberinfrastructure; open-source software security; cyber threat intelligence; AI risk management; and mental health applications. Sagar has published over 60 journal, conference, and workshop papers on these topics in leading information systems, cybersecurity, and machine learning venues. He has received over $4M (in PI and Co-PI roles) in funding from the NSF SaTC, CICI, SFS, and CRII programs, as well as private sources. He holds leadership positions in leading industry entities, including spots on the CompTIA ISAO Executive Advisory Council and the DEFCON AI Village Board of Directors.

    Sagar has won several awards for his research, including the IU Outstanding Junior Faculty Award in 2023, Kelley School of Business Early Career Research Impact Award in 2023, induction into the NSF/CISA CyberCorps SFS Hall of Fame in 2022, the AIS Early Career Award in 2022, the ACM SIGMIS (ICIS) Doctoral Dissertation award in 2019, and several best paper awards. He has also won numerous teaching awards and distinctions for his courses on AI for cybersecurity, CTI, and business analytics, including the IU Trustees' Teaching Award and being named as a Top 50 Undergraduate Professor by Poets&Quants in 2022, among others. Sagar has been cited in the Associated Press, Forbes, Miami Herald, Fox, Science Magazine, AAAS, The Penny Hoarder, and other venues. He is a member of INFORMS, AIS, ACM, IEEE, and INNS.

    Alan Dennis portrait

    Alan R. Dennis

    Alan R. Dennis is a professor of information systems and holds the John T. Chambers Chair of Internet Systems at the Kelley School of Business at Indiana University. He was named a Fellow of the Association for Information Systems in 2012 and received the LEO Award in 2021. His research focuses on artificial intelligence and cybersecurity.

    Alan is ranked in the top five most published information systems researchers over the last 30 years, and a recent Stanford study placed him in the top 1% most influential researchers in the world across all scientific disciplines. His research has been reported in the popular press almost 1000 times, including Wall Street Journal, Forbes, USA Today, CBS, PBS, Fox, CBC, and CTV. He is a past president of the Association for Information Systems.

    Questions?

    Please reach out to us at kelleypd@iu.edu to learn more about the AI Strategy for Executives short course and other courses offered by Kelley School of Business Executive Education.

    Explore all courses in the Leading with AI series
