In the eighth episode of Speaking of AI, LXT’s Phil Hall chats with Joti Balani, Founder of Freshriver.ai. Joti began her career as a software engineer before moving into the corporate world as a conversational AI consultant for some of the biggest companies on the planet. Now, as the founder and managing director of Freshriver.ai, she is leading the charge to find balance at the intersection of technology and society. Tune in to hear her insights and perspectives on emotional and ethical AI, AI regulation, gender bias in AI, and the future of generative AI.

Introducing the founder and managing director of Freshriver.ai, Joti Balani

PHIL:

My guest today began her career as a software engineer before moving through product management, product development, marketing, and consulting with some of the biggest organizations on the planet on their strategies for the implementation of conversational AI. These include Google, Citigroup, Johnson & Johnson, and American Express. She’s the co-founder of Women in Voice New Jersey. She’s a member of the All Ladies League. And within G100, a global organization of women leaders whose purpose is the creation of massive opportunities for women worldwide, she is the USA country chair for robotics and automation.

Please welcome the founder and managing director of Freshriver.ai, Joti Balani. Hi, Joti.

JOTI:

Hi Phil, thank you. Thank you for having me.

What does providing AI with emotional intelligence mean for your clients?

PHIL:

It’s great to have you here today. Joti, your company, Freshriver.ai, provides AI with emotional intelligence. What does that mean for your clients and in turn for their clients? How far removed is AI with emotional intelligence from what’s happening in mainstream contemporary AI?

JOTI:

That’s a great, great opening question. So, you know, I began this journey into conversational AI five years ago, after exiting a 22-year corporate career helping build the world’s largest wireless and wireline networks. When I exited and started this journey, I began to see where AI was coming in. As you know, AI was coined, created, thought of in the 1950s, and it’s gone through its winters. It’s now here and it’s here to stay.

And I’ve seen the journey from the front lines, actually working with and deploying it for some of the large organizations that you mentioned earlier. Part of what I’m seeing is an evolution where it’s getting better and better, but we still have a ways to go. The emotional intelligence part came when I wrote down the manifesto, the mission, for Freshriver.ai when I first began: AI cannot stand on its own without us humans. We need to take a look at the emotional, ethical, and economic intelligence if we are to have these machines actually do good for humanity. It was that thought process, alongside the fact that if you’re going to have emotional intelligence in these machines, it’s got to be created by humans who have that as their end goal while we still make money. I tell folks I’m a capitalist through and through, but it cannot be at the cost of emotional and ethical intelligence. It’s really that balance that I created as the formula, and it continues today as the foundation.

And as you can see in the media, as you pointed out, there are a lot of fallouts coming from this remarkable technology, which, by the way, I’m now dividing into two parts: there’s a pre-GPT era and a current GPT era, which began when OpenAI opened the floodgates in November 2022, right? So what we’re seeing in these fallouts, the lawsuits largely coming from the hallucinations of these models, for example, is a lack of thinking and consideration on the part of the developers and these large technology companies. The emotional intelligence part is: how do you think about the humans that you’re going to serve, right? It takes humans to make these machines emotionally intelligent in order to serve humanity, if that makes sense as the framework.

How is mainstream, contemporary AI handling emotional and ethical intelligence?

PHIL:

Yeah, how do you think the mainstream is doing on that? Are we there? Are they doing the right things?

JOTI:

So, I think this is a journey of a thousand miles and we’ve taken the first step. And the reason for that, if you look at history, you know, as much as I’ve been deep in this space for the last five years, I’ve also studied anthropology in equal measure, because we’ve seen this happen before, as early as the invention of electricity, for example, right?

Which took a lot of learning. You know, somebody must have put their finger in a socket and said, oh, we need fuses now, right? Just to use that as an analogy. You take the first step and you say, wow, this is a remarkable tool. And for your audience’s purposes, we need to look at AI as the tooling, not a hammer looking for a nail, which is a mindset issue, right?

So, if you begin with thinking about that, a lot is going to happen, because every time human beings find a new tool, they just get at it, right? It’s like, let’s go make some money. It’s the economic intelligence that kicks in first, after which there are these major fallouts. And by the way, generative AI is unprecedented, I’m going to call it a technology in this case, or a tool, that human beings have not had before. If you think about the web, mobile, cloud, you know, the last 25 years, if I were to frame that from a digital transformation standpoint, it’s all been deterministic software. As a software engineer, I clearly understood that: when you do software development, you know that the machines will do exactly what you want them to do.

We are now in the era of these generative pre-trained models where they appear to think for themselves, but they’re not, right? It’s still the data that’s being used to train these deep learning models, which, by the way, scientists themselves don’t fully understand the workings of, and that’s the other risk that comes in when you don’t understand something. But the bigger news for everybody, and this is what we share with our clients, everybody from the C-suite down to developers, designers, and product managers, is: look, this is non-deterministic technology. So you have to think about it in a very, very different way. What we’re seeing happen is people are using it, they’re plugging it in, and all sorts of issues are now falling out because they were never considered.
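A minimal Python sketch of the distinction Joti is drawing here; the toy sampler stands in for an LLM’s decoding step, and the continuations and weights are purely illustrative assumptions, not any real model’s behavior:

```python
import random

# Deterministic software: the same input always yields the same output.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

assert apply_discount(100.0, 0.2) == 80.0  # holds on every run

# A generative model samples from a probability distribution over tokens,
# so the same prompt can yield different outputs on different runs.
def toy_generate(prompt: str, temperature: float = 1.0) -> str:
    continuations = ["the policy allows refunds",
                     "refunds are not offered",
                     "please contact support"]
    weights = [0.5, 0.3, 0.2]  # made-up model probabilities
    if temperature == 0.0:
        return continuations[0]  # greedy decoding: always the most likely
    # Higher temperature flattens the distribution, increasing variability.
    flattened = [w ** (1.0 / temperature) for w in weights]
    return random.choices(continuations, weights=flattened, k=1)[0]

print(toy_generate("What is the refund policy?"))  # may differ run to run
```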

Going back to the framework of emotional and ethical intelligence, it needs to be done in what we believe is a crawl, walk, run approach. You can’t just plug it in and go all out, which is what’s happening right now, right? Everybody’s got access to this tech and, unfortunately, so do the bad actors. We just saw a report come out from Microsoft and OpenAI where they see hacking activity from nation states like Russia and China coming in. And that’s a double-edged sword. If you actually look at it, it’s like, oh wow, that’s a bad thing. But also: oh wow, you’re monitoring usage, which is now a privacy issue. So all sorts of things are falling out. I’m still an optimist. I do believe they will work themselves out. But the political, social, business, regulatory, and legal institutions have to step up faster. I don’t think those frameworks are moving fast enough. Right now it’s the wild, wild west. So what you’re seeing really is the wild west emerging first.

PHIL:

Yeah, and I guess that’s not surprising. As you said, it’s human nature: you see the gold fields opening up and you grab a shovel and go.

JOTI:

That’s right. It’s a FOMO effect as well, right? Everybody wants to be the very first to do it. And, you know, there’s a downside to being the first: you could fall over yourself in a very big way. So we advise our clients not to put their brands at risk. What we tell them is that it’s very hard for you to recover once you put something out there. So look at the examples: Air Canada was held liable because its chatbot responded with hallucinated policies that did not exist, right?

So, you know, this is where it gets really real: when businesses and enterprises, and governments, by the way, start to play with this tech and put it out there, the 20% of the time that it will not work, when it hallucinates, can cause some really, really big issues. So that’s what we’re worried about on behalf of our clients, right? Making sure we’re taking the risk management into account. We have a risk-management-first framework that says, look, just like medicine and healthcare, which say do no harm, those are the first philosophies that we work with. We make sure that whatever we do, whether we’re experimenting with a pilot or going all out with a product launch and a roadmap, everybody tests for do no harm.

Data collection and data training in the pre-GPT era vs GPT era

PHIL:

That’s great, thank you. You have said in the past, I’m not sure how far in the past, but you’ve said “Facebook is littered with 100,000 bots that have mostly failed.” Now, the first part of my question is how and why did they fail? And the second part of the question is given the rapid and accelerating rate of change in AI, is this still true and how long will it remain true?

JOTI:

So, I want to go back to the context I set earlier, right? There was a pre-GPT era and there’s a current GPT era. I made that comment in the first two years that I was in this space. This was the pre-GPT era, right? These large language models, deep learning, machine learning, have been around for a while, right? The problem has been access to the data and content to train them. In the pre-GPT era, by the way, there were 2,000 vendors in this space, and I have, with my own hands, worked with a lot of these technologies early on, when I was beginning in this field. And I would observe the same thing, right? They were not working very well. They seemed like they were dumb. And if you think about a lot of the virtual assistants that are out there, you look at Siri, you look at Alexa, they are not very smart in today’s terms, right?

But when you look at these GPT models, the reason for that is that it all depended on how much data was available. And most of those models depended on two things. One, on the clients: if it was an enterprise deployment, or if Facebook was putting one up, they needed to be able to use their user data to go train those models, right? And that was not quite available yet, because these generative pre-trained models really are gas guzzlers, right? If you think about Sam Altman saying, hey, we’re going to need $7 trillion, he’s actually not too wrong, because it takes a lot to get access to that data, as well as to train on it, alongside the compute and storage costs, right?

So in the early days, Facebook being Facebook, early adopters took the tech in the pre-GPT era and put it out there. It didn’t get too far. And they also said to businesses, hey, look, we have this tool and you can go put it on Messenger so that you can work with your clients. Well, it didn’t work. Those bots didn’t get used because they were not productive or efficient for people to, for example, order from a small business. So that was at that point in time, and that was true in the pre-GPT era: the tech was not that great. But look at what happened over the last five years. As we worked through that period, our teams expanded and we worked with more of these technologies, and we saw a change in how good these bots were getting just from the data that was being fed to them. So the first part of the issue was not enough data.

The second was requiring too many humans, because you’ve got to counter the fact that these things don’t work really well. You have to hire literally thousands of human beings, if you’re somebody like Amazon with Alexa, to keep them running, and they’re still not very smart. So if you think about the ROI of saving costs up front when you’re deploying these technologies, it was getting washed away because the cost was being transferred into hiring human beings, right? So you say, well, how is this going to work? Now enter the whole GPT era.

So, in 2017, there were papers being written about these generative pre-trained models, right? And it takes time to train on the wide swath of what’s available on the web. Think about Reddit posts or the subreddit categories underneath them, right? Entire encyclopedias, dare I say New York Times content, about which there’s now a lawsuit. But to sweep up all the content and data that humanity has created, good, bad, or ugly, took that much time. So, in parallel, there were these models being trained on everything under the sun. And when they got released in November 2022 with a conversational chat interface, well, a hundred million users later, now there’s data being generated by more human beings, right? So now we’re off to a start where it looks remarkable.

I’m sure you’ve tried ChatGPT or Anthropic’s Claude or any one of these. It’s just exploding. People’s minds are blown. But there’s a reason for that: it’s the wide swath of data and content that they were fed. Great. We’ve made a move as humanity from a technology standpoint. Now you ask me, well, where are we right now? Wow, these things are really, really good, but they hallucinate. With such confidence, they will tell you facts that are completely wrong, right? I think it was Gemini Pro that had just been released; it was asked: give me images of senators from the 1800s. It responds, sure!

Here are the diverse senators of the 1800s. And it’s got four senators who are, you know, not white, which is not true, right? We never had that. So when people look at that and ask, could you put the genie back in the bottle? Well, you can’t, and you shouldn’t, because there are such remarkable problems that we can solve.

So, you know, the pandemic’s a great example. The COVID vaccine came to us as fast as it did, and I’m not going to go into all the health controversies right now, but it came to be because of this technology; they were able to find the right protein structure in order to create the vaccine. So we human beings have to be cognizant and creative and innovative in finding what kinds of problems we want to solve. Don’t go throwing this tech at everything. It’s not meant to solve everything. There are cases where you do need humans handling certain things, but where is the collaboration going to be? That’s really the question. So you will see a lot of failures, remarkable, spectacular failures, but the march toward innovation is not going to stop. And hopefully we can solve some real problems that humanity has been facing that we haven’t been able to tackle so far.

Through this rapid rate of change in generative AI technology, is there any aspect of it that surprised you?

PHIL:

Great. Now we’ve already talked about this rapid rate of change. Through this rapid rate of change, is there any aspect of it that has taken you by surprise? Something you really didn’t see coming?

JOTI:

I think it was how quickly the government jumped in, where the FTC, for example, is starting to move very quickly and put things into place, for example, around deepfake videos, right? And the other thing that surprised me is how quickly other countries jumped on the bandwagon and released their models, which means everybody had been working on this for a long time, and it has now begun this huge war. That’s how quickly it happened.

Have you ever seen the government respond so fast to something in the past? If you think about the web, think about mobile, it took a while, right? We’re 25 years into the web, 15 years into mobile, 10 years into the cloud journey. Everybody had their time to figure out how to go. Right now, I’m shocked. The question is, will it move fast enough to stop some of the damage that can be done while still supporting innovation? I think there are some laws being invoked that could potentially just kill innovation itself, like, for example, tech companies being responsible if a user uses their technology to do something that causes harm. It’s similar to the self-driving car question: when that unfortunate death happens, who’s responsible? So, I think we have a lot of gray areas to think about. I just hope there are no knee-jerk reactions. And of course, you’ve got all the lobbyists, right? The tech vendors have seats at the table there.

So, their reaction surprised me, but what’s not surprising me is the way they’re going about it. For example, the White House got only a portion of the tech leaders in the generative AI space to come in, and I don’t know whether that will be enough. And in contrast, you’ve got countries like India that are saying, we’re not going to regulate gen AI.

Right, and then you’ve got Europe, which is, you know, the first mover to actually do something about it. So it’s going to find its way there. It’s like every day I wake up, I tell people, it’s like a soap opera: the next episode has already dropped by the time you wake up in your time zone, and you say, oh my God, let me see what’s happening. The speed at which the governments are responding is actually spectacular, I have to say.

PHIL:

Yeah, yeah, you raise some good points there. And it is interesting reconciling the US government and the EU both moving very fast, though it appears they’re not landing in the same place. And it’s quite easy to contrast those two. But when you bring Russia, China, and India into the picture, the range of possibilities in the contrast really broadens.

And, of course, each of them will have an effect on the others. So I think the US’s ideal policies or the EU’s ideal policies will not exist in a vacuum; their center will be shifted by what happens in Russia, China, India, and other large powers.

JOTI:

Absolutely. No one wants to get left behind and nobody wants another country to have a leg up. And there’s also defense, right? I feel if the US doesn’t move faster, we’re sitting ducks. Combining generative AI with quantum compute capabilities, which 11 countries already have access to, puts our elections and our country’s infrastructure at risk. You know, you saw what happened with MGM and Caesars; bringing those networks to their knees is quite scary.

And I do think that we need to think about the US as a country first. I’m not going to talk about nationalistic agendas; I don’t want to get into the politics of it. But there is an impact to the decisions being made on this technology, combined with technologies like quantum compute, and our hands are going to be forced. With the bickering in DC and with the elections coming up this year, I just feel like we’re losing ground and we need to move faster. So that’s what worries me the most.

PHIL:

Fair enough too. I think if you’re not worrying about it, then you’re not informed.

JOTI:

Correct, you don’t know what you don’t know. That, actually, Phil, is the biggest problem. Most people who have not been in this space don’t know what they don’t know. And so there is this sense of fear of what they don’t understand. And you know what happens when people make decisions about something they don’t understand? It’s like, what are you doing?

As we celebrate Women’s History Month, who are some women who have inspired you?

PHIL:

Yeah, yeah, exactly. So Joti, March is Women’s History Month and if everything goes to plan, we’ll be releasing this interview on March 8th for International Women’s Day. Who are some of the women that have been particularly inspiring to you?

JOTI:

I love that question. So, you know, I look as far back in history as Ada Lovelace, right, who effectively was the mother of computing, although she may not get as much credit for it, all the way to folks like Timnit Gebru, who was courageous enough to stand up when she saw things not working too well at Google when it came to ethics. And, you know, she started her nonprofit called DAIR. What warms my heart is to see that there are women out there who are finding a way to do the right things. It’s not about saying it’s a woman-led initiative for the sake of it.

But women are wired differently than men; that’s just how nature created, you know, beings and all the genders in between, right? To take the best of those thinkers and leaders and creators and innovators, I draw inspiration from all of them. And it’s not just folks in the technology sector, but also folks who’ve done human rights work, because, as you know, there is a huge impact on jobs in this space. So looking at what women have done over the course of history is what inspires me to ask, okay, in this new era that we’ve entered, what do we want the role of AI to be in our society, in our businesses, in our government institutions, in our education systems? It’s impacting everything.

So I’m looking for, actually, I need that inspiration from those women who thought differently, right? We already know the men who have made the change. And you know, I am not a feminist. I actually do believe that for women to win, men don’t have to lose. We just have to make sure we’re looking at things in a human manner. But I do try to draw on them, like I mentioned, Ada Lovelace and Timnit Gebru, and even political reformers, social reformers. I think they all need to have a seat at the table in order for us to imagine what this new world is going to look like.

So I read a lot of anthropology and history to really understand that, because this is bigger than just technology, right? This is bigger than just the capitalism of deploying these systems into governments and enterprises. Who are the women that are speaking? And I can tell you there are. When people tell me, oh, I can’t find women to hire in technology fields, I say, I can introduce you to 200. Or they’ll say, well, there have not been enough women. I say, here are more that I can show you, but you have to be aware, right? To your point around awareness and education. So we’ve got to look back in order to look forward as well, with women who’ve done this before.

How do we address gender bias in AI?

PHIL:

Okay, my next question, my last substantive question. It’s a fairly complex setup, so just bear with me. For this question, I’d like to focus on AI bias. And here are some things that have been written on that subject. From the Harvard Business Review: “AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities.” From the UX Collective: “bias in AI is a mirror of our culture.” From AI researcher Brandon Lewowski: “the reflection of societal biases in AI-generated content is not just a technical issue, but a societal one.”

Now, as I mentioned in the introduction, you are the G100’s USA country chair for robotics and automation. And the G100’s vision is to create an equal, progressive, and inclusive environment for women worldwide. To my mind, this puts you in quite a unique position. You’re right at the intersection of what Brandon Lewowski described as not just a technical issue, but a societal one. And with all of this as context, how do you think we address gender bias in AI? Is it urgent to address gender bias in AI specifically as a high priority? Or do you think that perhaps achievement of G100’s broader societal aims might make it possible to solve the problem of AI gender bias indirectly?

JOTI:

I think it’s going to take work at both ends, and here’s why. In the trenches, the folks working with these generative AI systems are dealing with this dirty, biased data, which is, you know, the fuel for these models. We are going to need human intervention to figure out how we take the bias out.

And thankfully, I believe there are also technical solutions for that, but it will take humans to actually ask the question: hey, before you use this data for training, is it biased? I always draw an example to illustrate for folks that not only is removing the bias the right thing to do, but economically it’s the most profitable thing to do. And here’s why. If you want this technology to serve consumers, consumers who’ve got green in their wallets, or digital wallets, you’re going to have to make sure that you appeal to them. And you can only appeal to them if the experience they’re getting from these AI systems reflects who they are.

If you have bias in the system, and I give this example because everybody gets it: if you have a young white man in Silicon Valley training one of these GPTs to work with a 50-year-old African American woman who’s going through menopause, you know there’s no way that young white male would understand what she’s dealing with. So if they don’t actually look at the data to make sure it’s not a turnoff, people are not going to adopt it, you know, like the hundred thousand bots that died on Facebook. That’s what we’re going to see happen here.
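As a minimal sketch of the kind of pre-training question Joti describes, here is a hypothetical representation audit in Python; the dataset, attribute name, expected shares, and tolerance are all invented for illustration, not any real pipeline:

```python
from collections import Counter

# Before using a dataset for training, check whether a sensitive
# attribute is represented in roughly the proportions of the
# population the system is meant to serve.
def audit_representation(records, attribute, expected, tolerance=0.10):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, expected_share in expected.items():
        actual_share = counts.get(group, 0) / total
        if abs(actual_share - expected_share) > tolerance:
            flagged[group] = (expected_share, round(actual_share, 3))
    return flagged  # an empty dict means no group is outside tolerance

# Hypothetical training rows, heavily skewed toward one group.
training_rows = [{"text": "...", "gender": g}
                 for g in ["f"] * 120 + ["m"] * 680 + ["nb"] * 10]
print(audit_representation(training_rows, "gender",
                           {"f": 0.50, "m": 0.48, "nb": 0.02}))
# -> flags "f" and "m" as far from their expected shares
```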

So I always draw on the win-win, right? From a capitalistic standpoint, it’s like: if you don’t do this, your service will actually not generate revenue. Everybody gets that. And so they say, okay, what do we do about it? And I say, well, you have to think through how you are gathering the data, how you’re cleaning it, and the folks that are assigned to it. You’ve probably seen how OpenAI trained ChatGPT initially, right? It was a blend of scraping whatever they could get access to on the web, but also human experts. There’s actually an article I read in Wired magazine today about the human experts who were paid as little as $30 an hour. They had nuclear physicists and linguists who didn’t know what they were working on, because there were these middleman companies that said, you know, go online, answer these questions.

There was actually a mathematician who was correcting calculus formulas on there. So humans were being used to make these things more intelligent. But if the creators of these models don’t have the intention of removing bias, the bias won’t get removed, which means these beautiful systems that are being released to the world contain all that bias. And what will happen is it will perpetuate racism and hate and all the bad things that humanity has created so far, because they reside in it. And by the way, it is technically a challenge to guardrail against those things. There is a site called jailbreakchatgpt.io that’s making these models reveal their racist sides. But you know, these models are not human; that’s what they’re being trained on.

So you’re going to see that fallout, and that’s the way we’re positioning this with the companies and organizations we work with. I say, look, you have to think about this. Don’t blindly go plug these closed models into your systems, because you really don’t know what they will do in the field. And there aren’t enough guardrails on the planet that can technically stop them. From a societal standpoint, yes, take loans, right? We know that historically with mortgage loans, folks from diverse communities, folks that are non-white, African Americans in particular, have not been getting enough. That would perpetuate for sure.

But at the same time, those folks have the spending power, and if they see something come out that is egregious to their sense of ethnicity, they will not use those products and services. It’s a double-edged sword. Does that make sense, for such a complex question and setup?

PHIL:

Yeah, absolutely. And I think the core of that is that we do need to address it from both directions.

JOTI:

Correct. Oh, I actually do want to add one more thing because you talked about the G100.

The massive green-field opportunity that this era of generative AI has brought upon us, and I’ve talked about this for the last four years as I started to see the massive impact, is for women and for minority communities to come into this field, for a couple of reasons. From the highest-level statistics standpoint, the World Economic Forum came out with a report in 2020, which they updated in 2022, saying there are 97 million new jobs coming in this era of AI that we’ve stepped into, and 85 million existing jobs that will be eliminated. Fast forward: we’re in 2024, and we’re already seeing the start of that, right? With the layoffs that have been occurring.

So what I’ve been telling folks, and I’ve actually been running a mentorship and training program since the summer of 2020, when everything was in lockdown, is: could Freshriver train women and minorities in this technology? You know, you can’t just take existing developers and UX/UI designers and put them into roles dealing with generative AI, because it’s a very different technology they’re dealing with. And so I tell women, and I tell folks from minority communities: if you feel that you’ve been left behind in the previous generations of technological advancement, this is your time to enter the field and gain a foothold, because those 97 million jobs, guess what? Everybody’s got to figure out what role they’re going to play. And those roles have not even been defined yet.

When I first started in this industry five years ago, as an independent consultant, the job descriptions that would come in would have three lines on them. And when I would talk to the hiring manager, they’d say, we’re not quite sure what we’re looking for, but you’re who we want. This happened over and over and over again. And so I tell folks it’s open season for everybody. Imagine a world, if we fast forward five years from now, where we’ve got equity in these jobs. It’s a great opportunity, I think.

And so with the G100, that is what we’re aiming for across the world. Women leaders from every country you can think of are part of it, putting programs into place to train more women and young girls, all the way from K through 12 to universities. And if we push harder and build momentum, I think we will land it, because there are 97 million jobs that need to be filled.

Where are the 97 million jobs coming from? What are going to be the key roles that this is going to produce?

PHIL:

Okay, that was going to be my very next question. The layoffs are so easy to see; they’ve been coming in huge waves. The new jobs are a little harder to measure. Where are the 97 million jobs coming from? What are going to be the key roles that this is going to produce?

JOTI:

So, it’s actually what we’ve just been talking about, right? Who’s going to clean the bias out of the data? Who’s going to train these systems to make sure they don’t hallucinate? Who’s going to continually test these systems over their lifetime to make sure that they are performing the job they were meant to perform and haven’t gone sideways? Product managers who need to dream up how to work with these technologies, right? Where does it make sense to deploy them? And developers: we’ve talked about deterministic software, right? Decision trees, which these systems are not. How are we going to have developers who understand how to work with that?

You know, the layoffs that have come have been technical roles, and all those roles have to be retrained. There is no university or place of education on this planet, I guarantee it, that can teach you how to do this. Generative AI is about doing, right? So these roles are literally emerging based on the problems we’re seeing, whether it’s bias or whether we should deploy this technology at all.

Just because we can doesn’t mean we should. So I’ve always said this, for the last five years: we need to have as many non-technical roles at the table as technical roles. We need philosophers, we need social scientists, we need linguists. By the way, those are the folks that are on our teams. When we’re working on these projects, we’re making sure that it’s not just the engineers or the MLOps folks or the data analysts making these decisions. We’re making sure that in the pods that work together to bring these systems to life for our clients, there is an equal voice at the table for folks who are considering social impact.

And our job is to advise our clients: we know you want to do this, but here are the repercussions of it. The decision is yours, but it’s our job to uncover that impact, back to emotional intelligence, back to ethical intelligence, right? Should you deny healthcare benefits to an 80-year-old because you’re now looking at their full history and the risk, just because you could gather up all the data showing they’ve been ill? Should you? Right?

PHIL:

I think the answer to that should be very clear to anyone, but of course it isn’t.

JOTI:

A lot of gray areas when you’re being pulled in three different directions: economics, ethics, and emotional intelligence. Not easy, not easy to do.

PHIL:

Yeah, but I’m glad that you’re approaching this with a perspective of optimism because I think that’s going to be important to achieving these goals. Otherwise, we’ll be defeated before we start.

JOTI:

Exactly.

How will AI and technological advancements impact the world over the next five years?

PHIL:

So, I have just one last question for you. If you were running this interview, what’s the big question that I’ve forgotten to ask and should have asked and what’s the answer to it?

JOTI:

I think the question would be: what do we see the world being like in five years, given the trajectory we’re on? Is it dystopian? Is it utopian? Because we’re hearing everything from folks like Bill Gates sounding alarm bells to, you know, Sam Altman saying this is the most beautiful thing ever, and Elon Musk, right? What kind of world do we imagine we’re going to be in? The answer to that, again, is back in history: human behavior has not changed over the last 70,000 years, you know, since we were still Neanderthals, right? I think we will find a place of balance.

I think there will be violence, and I think there’ll be moments of beauty. There’ll be moments of thrill when we solve, for example, healthcare issues, developing cures for diseases that have plagued us, you know, Parkinson’s. My mom and my mother-in-law both have Parkinson’s. In our lifetimes, even in five years, because I do believe this is how fast it’s going to move, we’re going to see the best of it and the worst of it.

And my optimism is this: we cannot kill this technology, and we shouldn’t, not that that would happen, because the train’s left the station here. I do believe the world will be a little bit better than where we’re at, solving those problems. But there will also be a lot of angst that we’re going to see. So the five-year view is still weighted toward optimism, but we’ve got a lot of work to do before we get there.

PHIL:

Yeah, well, I sincerely hope that the optimistic part of that is absolutely true. And I would love to see a little less violence and turmoil globally. It’s a sobering thought that it might be worse in five years than it is today, but I sincerely hope that that part of it is not the case.

Joti, thank you so much for spending time with me today. It’s been great talking to you, great hearing your insights and your perspective on this. As I said earlier, you really do sit in a unique position at the intersection of technology and society, with the capacity to speak to that. And as you’ve said, the intersection of society and technology is probably where the successes for this technology are going to be driven. So I really appreciate this. I’m sure people will enjoy hearing your perspectives, and thanks again for being here.

JOTI:

Thank you for having me and it’s been my pleasure.