In the ninth episode of Speaking of AI, LXT’s Phil Hall chats with Meeta Dash, VP of Product at Verta.AI. With a background in marketing, Meeta has a unique stance on balancing product management and product marketing. She shares her insights on Verta’s approach to product development, the different trends driving AI progress, and the obstacles standing in the way of AI reaching its full potential. As we close out Women’s History Month with this exciting episode, tune in to learn more about generative AI trends from the lens of a tech product development expert.

Introducing product management and marketing extraordinaire, Meeta Dash

PHIL:

Today’s guest started her career as a software engineer, but after uncovering a passion for product, she completed an MBA from UC Davis with majors in marketing, technology management, and strategy. Over the past 20 years, she’s held senior product management and product marketing roles at organizations including Infosys, Autodesk, and Cisco. In 2018, just weeks after I left Appen, she joined the company and led the product integration and rebranding effort to bring Figure 8 and Appen together. She is currently VP of product at Verta, where since October 2020, she has led product strategy for Verta’s Generative AI Workbench, setting the vision for their MLOps products. Please welcome today’s guest, Meeta Dash.

Meeta, it’s lovely to have you here.

MEETA:

Thank you, Phil. Nice to be here, and I'm looking forward to this discussion.

How do you design and deliver an end-to-end strategy for the development of products?

PHIL:

Great. So, your role, at least as I understand it, is to design and deliver an end-to-end strategy for the development of products to maximize existing customer bases and engage further customers. How do you do this?

MEETA:

Yeah, so the product role is generally very interesting, specifically in the tech sector. Sometimes you start a product innovation from scratch, which we did at Verta, where we saw a really big market pain point around operationalizing machine learning. Folks have a lot of models in development. They've figured out how to train a model and make it ready for production, but then what next? How do they move those models to production, run them at scale, and guarantee safety and quality, right? So from a product standpoint, it was very interesting to see that market pain point. There was no platform or tooling that could help folks standardize that process. So just to give you an example from Verta, the way we build a product strategy is to first understand the customer pain point, understand there is a market gap… and then apply technology.

I’m a big believer that technology comes next. The first thing is a customer pain point and a market need, and then you figure out whether the technology is a good fit. That’s one of the approaches I generally take that works well. And then when it comes to scaling a product, you have a product and you’re looking for growth sectors; that’s another area where, again, technology and new trends play a key role. If you think about generative AI right now: a year and a half back, generative AI was nowhere, right? Then this technology came up and everyone saw a big opportunity, and that’s where the product management and product strategy teams need to figure out that, hey, there is a technology that can really play a big role in innovating and taking the market forward.

How do we figure out how to build the right product around it? To address that need, I see a lot of need to do more user interviews, understand user needs, and also worry about the user experience, because most of these products really deal with your end users. It could be a business user, or in a B2C space a normal consumer, but building the right user experience is really critical, because tech products can be very complicated, and if they are, adoption will suffer.

So when you’re thinking about growing and increasing adoption, then in my view, thinking through user experience, not just technology, is also critical. In a nutshell, I would say: understand market pain points and customer needs, and build the right product with the right user experience that can really help it scale and make the product sticky. That’s the role of a product manager.

With your background in marketing, how often are you blending your marketing expertise with the product development and creation process?

PHIL:

Yes, certainly making the product sticky resonates very strongly. On a related note, you are a VP of product with an MBA in marketing, and that makes sense. But if it’s the role of the VP of product to ensure that the products your organization creates are a match for its target audience, how often are you driving the creation of products that your target audience knows it needs, and how often are you putting on your marketing hat and selling something that you know they need, but they perhaps don’t?

MEETA:

I think in the machine learning and AI space, it’s kind of 60-40. Forty percent, they know they need it. Sixty percent, they are struggling, but they don’t know whether this is the right tech or product fit. So it’s a combination of both. But with new technology, you have to sell the future, sell the vision, and talk about, hey, there is this possibility. And for that, creating proof of concepts and showing what is feasible really helps a lot.

But if you work closely with users, you understand their problems. Then you come up with some ideas and show them, hey, this is possible, and they light up: wow, I have never thought about that. That’s a really interesting experience I have gone through in the past few years.

Is the combination of a product and technology background with marketing common in product development?

PHIL:

Is that combination of product and technology background with marketing, is that a common combination or are you a fairly unique case?

MEETA:

I would not say I am unique; I’ve seen product managers come from different backgrounds. I have team members who come from a completely humanities background, with no engineering at all. Maybe having a tech background helps, but folks come from other streams as well, like pre-sales or sales, and they do a really good job.

So, at the end of the day, there is some intuition you have to apply when you’re talking to customers and users or looking at the market. There is hard data that you look at, but then there is an intuition that you build that helps. And the most important thing, I feel, is really user empathy. No matter whether you’re from sales, customer success, or engineering, as long as you have user empathy and you’re trying to solve the user’s problem, versus thinking, okay, I need to sell this product, you’ll build a really great product.

Are you building solutions for a better-informed audience today than you were in the past?

PHIL:

Yep, that certainly makes sense to me. LXT – the company I work for – has published an AI maturity report. We’ve done it annually, and we’re just about to publish our third edition. We’re recording this ahead of time, so by the time this interview comes out, it will actually have been released.

Now, one of the key findings is that the proportion of organizations with a clear AI implementation and utilization strategy has grown steeply. A year ago, it was a minority of organizations. Now it’s a clear majority. Are you seeing a similar change? Are you building solutions for a better-informed audience today than you were in the recent past?

MEETA:

I don’t know about last year specifically, but definitely when I was at Appen and Figure 8, the landscape was very different than it is today. At that time, we were seeing a lot of folks who were just thinking about building ML models and running POCs, but now folks have a clear strategy. They’re thinking about governance and risk control. They’re thinking about how to run at scale in production. Those problems were not really thought about two or three years back. So definitely that’s a clear signal we are getting from the market.

The other trend we are seeing is GenAI. It’s making AI much more accessible to companies, so that could be one trend that is also having an impact. Folks who have not traditionally thought about machine learning are now much more aware of it.

PHIL:

Well, that’s a good segue into my next question. Our survey is focused on AI in the broader sense, but it has shown that currently the majority of companies are placing a higher priority on generative AI products than on other AI projects. Is this just a result of hype and what’s in the news, or is it actually directly related to goals, objectives, and practicalities?

MEETA:

To be honest, I think it’s a combination of both. There is definitely hype, which challenged companies, because no matter which industry you are in, whether it’s banking, insurance, or healthcare, you will fall behind and lose your competitive edge if you are not in this space, right? So the generative AI hype helped.

The other thing that really helped is OpenAI and ChatGPT, right? Consumers have access to it. Now in the companies that have lagged behind, their users have access to ChatGPT and are using it. And their IT teams are waking up and thinking, hey, folks are using ChatGPT; should we block them, or should we have a strategy in place so that if they are seeing value, we can move this forward? So I think those trends really helped a lot.

What are your views on the availability of quality training data?

PHIL:

The survey also showed that one of the most significant bottlenecks for generative AI deployment is the availability of quality training data. It also indicated that the vast majority of respondents expect the need for training data to continue to increase. What are your views on this?

MEETA:

So you’re talking to someone who has worked on a training data platform for years, so I have seen the pain points around it. I actually built a chatbot in the past at Cisco, and at that time we literally struggled a lot with getting quality training data. That problem was eventually solved.

The quality of your product is directly related to the quality and amount of your training data. And you don’t pause when the model is in production; you continue, because the model encounters so many edge cases, especially with GenAI.

These models are generative, so there is a high likelihood of hallucination and unpredictable outcomes. So you keep evaluating, and you make sure your models are really fine-tuned for your use case, because you are using a general-purpose model and that’s not going to work for you all the time. So the need for training data, if anything, will increase, and I would stress the quality of the training data even more.

What are the biggest obstacles to AI reaching its full potential?

PHIL:

Okay, that certainly resonates with what we’re seeing in the marketplace. And obviously, for a business that works in this space, we’re always very interested in the answer to this question: what do you see as the biggest obstacles to AI reaching its full potential?

Are they technical, regulatory, financial, something else?

MEETA:

I’m less worried about financial, because technology can solve that. I know there are folks who are thinking about the cost of large models and how to run them, but technologically we can solve those challenges. What worries me is what I have seen: we have been in a lot of executive round table discussions, and the trend I’m seeing is that folks are running AI or generative AI proof-of-value projects inside their organizations. They’re seeing benefits. But where they’re stuck at this point is they do not have the right governance or framework to ensure they can follow safety practices and ensure quality and compliance in the right way, so that these things can actually be productionized.

So I think that’s where a lot of companies are dragging their feet, right? In some cases, they’re fine using these for internal tools and internal purposes, but for external use or for high-risk scenarios, I believe that is the current bottleneck. So governance, those controls, and quality are a big challenge. Regulatory, yes, we have so many regulatory rules coming up. But I think that’s not a threat, it’s a good thing; it will keep us on our toes to make sure we are following best practices.

How environmentally sustainable do you think the continued growth in the large language models domain will be?

PHIL:

Great. Another potential blocker that I’m seeing, not with high frequency, but I am seeing it come up, is the environmental cost. So forgive me if this is a fairly long setup to the question. A recent MIT report quantified this in fairly frightening terms. It said the cloud now has a larger carbon footprint than the entire airline industry. A single data center might consume an amount of electricity equivalent to 50,000 homes. And training one AI model can emit more than 300 tons of carbon dioxide equivalent, which is about five times the lifetime emissions of an average American car.

Now, I haven’t fact-checked the precision of these numbers, but it’s MIT, so I’m expecting that this is reasonably well researched. And in any case, it is well documented that contemporary data centers in general, and LLM generation in particular, require massive compute power, and they have a correspondingly large carbon footprint. How environmentally sustainable is continued growth in this domain? Do we reach a point where we simply can’t capitalize on the power of AI and large language models without paying a huge environmental debt?

MEETA:

I think this is a big problem right now. I acknowledge that, and I have read a bunch of articles around it. One thing our company is investing in right now, which I think is very relevant here, is this: large language models are great, but there is an increasing trend toward asking whether we can achieve similar quality with smaller models. Can we take large language models and distill them down into task-specific or use-case-specific models? That’s something I was reading about from Microsoft; they are also exploring those areas.

So I think there will be a trend where folks start thinking about smaller models that achieve similar generative capabilities. And we all need to acknowledge this: there may be big companies who can afford to run those large data centers, but is that really the path forward for us, or do we need to be more innovative and solve this problem? We are actually addressing that with model distillation, and we are seeing increasing interest. Over the next two years, I see there being a focus on cost control and on being more environmentally conscious about these models.

PHIL:

Yes. And if you think about it, linking environmental concerns to cost control is powerful: it might be hard to persuade people to address environmental concerns on their own, but if those concerns are addressed by addressing cost, it’s not hard at all to convince people.

MEETA:

That’s true, that’s true.

Who are some of the women that have inspired you throughout the years?

PHIL:

Yeah, I’m a firm believer that alignment of interests can achieve all kinds of things. You are doing good for the environment, but at the same time you are financially much better off.

So March is Women’s History Month, and we’re planning on publishing this interview on the last day of March as a closing point for Women’s History Month. Who are some of the women that have been particularly inspiring to you?

MEETA:

I’m inspired by a lot of women leaders and philanthropists. I follow them on LinkedIn and Twitter, folks like Melinda Gates and Michelle Obama, to name a few.

But again, in my view, I’m really fortunate to work with really good women leaders, peers, and my team members. And these women really inspire me a lot more because I work with them day to day. I see their struggle, I see their challenges and I see how they overcome those challenges and kind of pave their own ways.

Those are the more relatable women I interact with day to day, and even if the wins are very small, I see them happening and I get inspired. To give an example: I had a team member who was new to product management and struggling with public speaking. I saw how she challenged herself and overcame that, and she became a real star player on the team. I’ve also seen folks who have struggled in their personal lives and managed both their careers and their personal lives really well. So I get inspired more by the women I interact with day to day.

As a technology leader, what does the representation of women in technology look like to you?

PHIL:

Yeah, fair enough. I work with a pretty inspiring team of women myself. So I completely get that. Now, I have a generally optimistic view about social evolution. My optimism is shaken every now and then, but I take a generally optimistic view. When I spoke to Pradnya Desh earlier this month, I asked her how she felt about the representation of women in technology and whether she thought that proactive affirmative action was needed. She responded by pointing out that only 2.2% of venture capital in the US goes to companies founded by women.

Does this resonate at all with your own experience as a technology leader?

MEETA:

It does. Fortunately, I’m working at a company that is female-led; the founder and CEO is a woman. In the past, some of my managers have been female leaders. But across the board, I’ve been in meetings where there are 20 men and I’m the only woman in the room. I’ve seen that happen a lot in my career. In tech and in AI in general, over the last few years I’m seeing more women coming into leadership and other roles, but the growth rate is still much slower than it should be. And in the VC and startup space, it’s even weaker.

PHIL:

Yes, I can remember being particularly inspired some years ago when Justin Trudeau became Prime Minister of Canada. A journalist asked him: one of the priorities for you was to have a cabinet that was gender balanced – why was that so important to you? And his answer was, “Because it’s 2015,” which I thought was a great and inspiring answer. But that was 2015, and it doesn’t feel like we’ve made as much progress as we should have. So I tend to agree with you that things are moving in the right direction, but it would be great if they moved a bit faster.

How do we make sure that we are working towards a world where the intelligent systems that we are developing are working in favor of humanity and not against it?

PHIL:

So my last question, what is the big question that I should have been asking you, but perhaps I haven’t? And what is the answer to that?

MEETA:

That’s an interesting question; we have covered a lot of ground. There is one trend, a lot of talk happening not just in the tech space but in the consumer world in general, around AGI, right, artificial general intelligence. With all the things that happened at OpenAI last year, that topic became very prominent, where folks are thinking the machines will be smarter than humans, these intelligent agents will create more agents, and a scenario from a sci-fi movie could happen to us.

I can talk about that, but I would also like to hear your opinion, because you have been in the training data space, you are closer to the AI world, and you’re seeing what’s happening in the field, so you can relate to it as well. I think it’s a very interesting conversation, and we should all acknowledge that there is a possibility, not right now but in the future. How do we make sure that we work towards a world where… the intelligent systems we are developing really work in favor of humanity, not against it, right?

So all the things we are discussing, governance, compliance, the regulatory rules we should follow, those things need to be discussed more, and not just from the angle of: is it good for my business, is it going to bring me financial benefit? Rather, is this something that can help us in the future as a human race, doing a greater good for humanity? I don’t know if you have any opinion on it.

PHIL:

I always have an opinion. During the course of doing this interview series, related questions have come up fairly frequently, and I have generally been somewhat surprised that the people I’ve spoken to have tended to downplay that and say, look, it’s just not really a concern. They’ve given a very optimistic view. And these are people I have a very high level of respect for; I choose my guests carefully.

So yeah, I have harbored some robotic doomsday concerns. I read Isaac Asimov as a kid. There was always this theme running through his books of having guardrails to keep robots in service of humanity, and of finding interesting ways for those guardrails to break down. The experience of reading those things is still with me. I can’t help but look at this and think Asimov might have been right on the money writing those stories, probably in the late 1940s or 1950s. The scenarios he imagined were remarkably close to where we find ourselves today, or where we might find ourselves in the near future.

So yeah, I probably harbor greater levels of concern than most of the people I’ve spoken to. I see scenarios where technology is always going to take instructions literally, and when you take things literally, all kinds of bad things can happen. People write good laws with good intentions, and then you find ludicrous instances where judges are forced to treat the law as it’s written; nobody foresaw the situation, the law is being literally interpreted, and the outcome is not what the lawmakers had in mind.

I won’t dive into US politics and things related to that. Sorry, long answer, but yes, I think it would be naive of us not to keep this somewhat front of mind. I certainly don’t want to stop progress. I love all the upside of what we’re seeing, and I think the upside can continue to dominate. But yes, we should be cautious. And I’m a risk taker by nature, by the way, so it’s saying something for me to say we should be cautious.

MEETA:

That’s perfect. The outcome may not be as dramatic as has been pointed out, but there is a risk; that’s what we need to acknowledge.

PHIL:

Well, Meeta, thank you so much for joining us today. It’s been great to speak with you. I’m sorry we missed the opportunity to work together six or seven years ago when we nearly overlapped, and I hope we’ll interact in the future. But thanks again for doing the interview.

MEETA:

Me too, me too. I really enjoyed speaking with you and I’m glad that you have invited me.