Panel Discussion: Opportunities for Hypergrowth in Gen AI

Gen AI Summit 2023 - Open or Closed

  • Snorkel AI: Alex Ratner, Cofounder and CEO

  • Spectrum Labs: Ryan Treichler, VP of Product

  • Fiddler AI: Krishna Gade, Co-founder & CEO

  • Jasper: Shane Orlick, President

  • Coatue: Lucas Swisher, Co-COO of Growth & General Partner (Moderator)

SUMMARY KEYWORDS

models, data, ai, build, customers, companies, years, question, product, enterprise, bottlenecks, layer, generative, cases, point, bias, work, invest, incumbents, people

SPEAKERS

Krishna Gade, Alex Ratner, Lucas Swisher, Shane Orlick, Ryan Treichler

 

Lucas Swisher  00:16

Hello, everyone. Thank you for joining us. We are very excited to have a number of great founders and execs here on this panel to discuss hyperscaling in the current environment. Obviously, things move very quickly in AI these days; every 30 seconds we get a new press release or a new product announcement. And I think we have four of the world's experts in and around this category who can help us think through some of the challenges around hyperscaling. So just to kick things off, I figured I'd open up a question for the entire group. There are obviously stacks emerging around AI: you have the infrastructure layer, the language models, the tooling layer, all the way up to the application layer. I'm curious what folks think are the areas that need the most improvement, where the biggest opportunities are, and where the value might accrue in today's world.

 

Alex Ratner  01:13

I can take a first crack at this, and I'll try to be a little bit more inflammatory, because there have been something like three gen AI panels this week and there's too much agreement. So let me see if I can strike some interesting disagreements. I think there's an infrastructure layer which, broadly bucketed, includes everything from the hardware to the platforms for doing what we think of as model-centric development, training, and so on. Then I think there's a critical data-centric layer. I should introduce myself: I'm Alex Ratner, co-founder and CEO of Snorkel AI, so if you look me up you'll see where my bias is around what we call data-centric AI. That data layer is relevant both for generative AI and for what we often call predictive AI: the labeling, classifying, and tagging that traditional business value is built on. And then you've got the application layer, which for predictive AI often looks like traditional MLOps, and for generative AI is a whole new set of copilot-style stacks and others that are still being invented. But I'll make a more declarative claim: everything but the data layer, and maybe the UI layer for generative, is effectively commoditized. The models, the algorithms, the infrastructure for all but the most specialized, bespoke end use cases: it's all about the data these days. We're seeing tons of examples in the environment. New language models are released every five seconds, and with the open-source ones, for a couple hundred bucks you can siphon off the learnings of a closed-source model into an open-source one. Everyone's using the same models; everyone's using the same algorithms. I'm affiliate faculty at the University of Washington, and one of my students there, with folks from Apple, Google, Stability, and others, just released a benchmark called DataComp. This is my last point and I'll leave off: we fixed everything in the code, from data loading to model architecture to the training procedure (it's a multimodal image-text, CLIP-style model, for those who are familiar), and just by playing around with how the data is curated, filtered, cleaned, and sampled, you get new state-of-the-art results. So what's the point? My claim is that all the layers other than the data, and in particular, if you're interested in enterprise, the domain-specific, private enterprise data and knowledge, are rapidly getting driven to commodity. That's awesome for the space, but it's hard to hyperscale when you don't have some kind of advantage around unique data or knowledge.
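
To make the DataComp-style experiment concrete, here is a toy sketch, with everything hypothetical except the idea itself: the model, architecture, and training loop are held fixed while only the data filtering threshold changes between runs.

```python
import random

# Toy stand-in for the experiment described above: hold the model and training
# pipeline fixed, vary only how the training pool is filtered, and compare.
random.seed(0)
pool = [{"caption": f"example {i}", "quality": random.random()} for i in range(10_000)]

def curate(pool, threshold):
    """Keep only examples whose (hypothetical) quality score clears the threshold."""
    return [ex for ex in pool if ex["quality"] >= threshold]

def train_and_eval(train_set):
    # Placeholder for the fixed training + evaluation code (identical across runs).
    # Here we just report subset size and mean quality to show what varies.
    mean_q = sum(ex["quality"] for ex in train_set) / len(train_set)
    return {"n_examples": len(train_set), "mean_quality": round(mean_q, 3)}

for threshold in (0.0, 0.3, 0.6):  # the only knob that changes between runs
    print(threshold, train_and_eval(curate(pool, threshold)))
```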

 

Krishna Gade  03:58

I'd add that the missing link in the generative AI world is the layer that can provide safeguards and guardrails for responsible AI usage. That's basically the mission Fiddler has been on for the last four years: building trust and transparency. Generative AI is, if anything, even more magical than predictive AI has been, so how do you make sure you're taking care of safety and observability, and ensure that the AI is serving society and serving your customers in a trustworthy manner? That's probably under-invested, because I don't think companies in the growth phase building AI apps are taking care of this issue, and without making sure that the answers are trustworthy, you can't really deliver on the promise. So that's what we've been working on. Shameless plug: we released an AI Auditor last week to do exactly this, audit large language models with third-party monitors that look for correctness, consistency, and safety issues.
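
As a rough illustration of the kind of third-party monitoring Krishna describes (this is not Fiddler's product; the model stub, similarity heuristic, and banned-term list are all made up), a consistency-and-safety check might look like this:

```python
from difflib import SequenceMatcher

def query_llm(prompt: str) -> str:
    # Stub so the sketch runs on its own; replace with a real model call.
    canned = {
        "What year was the transistor invented?": "The transistor was invented in 1947.",
        "In which year did the invention of the transistor happen?": "It was invented in 1947 at Bell Labs.",
    }
    return canned.get(prompt, "I'm not sure.")

def consistency_score(answers: list[str]) -> float:
    # Pairwise string similarity as a crude proxy for semantic agreement.
    if len(answers) < 2:
        return 1.0
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def safety_flags(answer: str, banned_terms=("ssn", "credit card number")) -> list[str]:
    # Flag answers that leak terms the application should never emit.
    return [t for t in banned_terms if t in answer.lower()]

paraphrases = [
    "What year was the transistor invented?",
    "In which year did the invention of the transistor happen?",
]
answers = [query_llm(p) for p in paraphrases]
print("consistency:", round(consistency_score(answers), 2))
print("safety flags:", [safety_flags(a) for a in answers])
```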

 

Krishna Gade  05:14

So that's what we believe is the missing piece.

 

Shane Orlick  05:18

Yeah, I agree with everything that's been said. I guess today's a great day to be Nvidia at the hardware and infrastructure layer, but I think there are just a handful of companies that are going to make a lot of money in that space. For us, and again we're biased, Jasper is at the application layer, and the opportunity is to come in and really revolutionize the way that enterprise software, in particular its UIs, works with customers. We've been talking about productivity gains in enterprise software for the last 20 years, but in reality we haven't really delivered on them; we've made things incrementally better. But I think AI, for the first time, really has an opportunity to transform the way people interact with the software they use every day in their jobs. So that's where I get most excited: how this is going to transform the way people work. It's also the noisiest space, and picking the right solution, the right company to build on, and the right products to adopt is a real challenge right now. But that's where I'm most excited.

 

06:13

I'll just kind of bridge the first two comments. We're also a company that has been in the space for a while, and I definitely see the need for that market. But the thing that actually drives the ability to do that effectively is the data; it's the only way to solve it. So I think it really goes back to that very first point: the data layer underpins everything else.

 

06:36

Yeah, I think that's that's a really good point. And and Krisha you kind of played into Ryan's point a little bit as well. Safety piece, which I thought was I thought was interesting. Shane, a little bit to Alex's point in question. You all obviously have a lot of models on the back end of Jasper that you either play with or put in production and I'm curious, what's your What are your thoughts on kind of the open source models versus the proprietary models? And what do you think about the sort of similarities and differences? What do you all prefer? Do you think that he's right about the monetization aspect?

 

07:06

Yeah. So I'm a huge family patient obviously. So anything that people are working on that brings additional technology into the world, as long as it's safe, as long as it's unbiased, as long as you know, it's differentiated from all the other models. I'm a huge fan of that. At Jasper, we realized very quickly that it wasn't going to be just a single model that was going to win. We actually started probably the first year of our life as a rapper on open AI. But then our customers were asking for different things, different experiences, different use cases. And so we actually built a really open AI engine that plugs into all the different models. So the more models for us the better again, as long as they're safe as long as they add differentiated experiences. We even built around models, small models, really finely tuned, that are outperforming large models. So for us, you know, we're totally open but the more the better as long as they're headed in the right spot.
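
A minimal sketch of what a model-agnostic engine like the one Shane describes could look like; the use cases, model names, and call functions are hypothetical and are not Jasper's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    name: str
    call: Callable[[str], str]   # prompt -> completion

def call_small_finetuned(prompt: str) -> str:
    # Stand-in for a small, finely tuned in-house model.
    return f"[small fine-tuned model] {prompt[:30]}..."

def call_general_llm(prompt: str) -> str:
    # Stand-in for a large general-purpose hosted model.
    return f"[general hosted LLM] {prompt[:30]}..."

ROUTES = {
    "product_description": ModelConfig("acme-small-marketing-v2", call_small_finetuned),
    "default": ModelConfig("general-llm", call_general_llm),
}

def generate(use_case: str, prompt: str) -> str:
    # Route each request by use case; unknown use cases fall back to the default.
    config = ROUTES.get(use_case, ROUTES["default"])
    return config.call(prompt)

print(generate("product_description", "Write copy for a waterproof hiking boot."))
print(generate("brainstorm", "Give me five taglines for a coffee brand."))
```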

 

Lucas Swisher  08:01

Great, thank you. What do you all think are the most overrated and the most underrated growth opportunities in and around the generative AI stack?

 

Krishna Gade  08:17

Maybe this is a contrarian take, I don't know, but I think it's somewhat in line with what Alex was saying. Companies have been investing billions of dollars to train these large language models, and I don't know if that's the right model for the long term. Because eventually, and you're already seeing this, you can get away with smaller language models trained on specific data sets and get comparable or even better performance. So a lot of what has happened over the last few months, with several different companies building these giant models, is something I'm not as bullish on.

 

Alex Ratner  09:01

Yeah, I'll plus-one that, and I'll say it with a little sadness, because I think some of the companies that are most known in the space and most heavily invested in building a large language model or foundation model deserve a ton of credit for pushing this whole field forward and pushing the practical progress of scaling these models up. But I do think there's an overestimation of how much value they're going to capture. I think generalist versus specialist is a helpful axis. There are going to be a small number of winners, one or two, for general commodity coverage and general consumer use cases: basically what you can soak up from the internet. I think that will be winner-takes-all; it's really all about the data you can collect, and that tends to compound. Then for everything specialized, where you actually have specific tasks that need to work really well, we've already reached, in my opinion, a saturation point. You can take an open-source base model, or the models Microsoft posted at Build yesterday or the day before (we're not anti-Microsoft, they're great partners), and just by fine-tuning with your own data and knowledge on top, or prompting, or any of these ways of basically teaching the model on top of what has already been trained, you can do really specialized tasks. If you want to get to production-level accuracy, you can do that with a model a hundred thousand or a million times smaller than GPT-4 if it's trained in the right way. So for the specialist side, which is where a lot of the enterprise value lies, we've already reached a saturation point; it's good enough, it's fairly commoditized. What remains is all these different specialized models that you tune and build based on your data and feedback, and that's where the value is. So I'd say overrated: the big closed-source generalist models. And underrated, and let me try to be controversial since we're all agreeing with each other: I think the incumbents, the large vertical-specific companies that are sitting on lots of specialized data and existing workflows and users, are going to benefit a ton from this, because they're going to take in the open-source progress and then operationalize all that data on top of it. We build a development platform for data-centric AI, labeling but also data curation and collection for training, so we're trying to sell these incumbents the platforms to do this on their own. But I think they're underrated in terms of who wins from this.
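
As a sketch of the "small specialist" recipe, here is a minimal fine-tune of a compact open model with the Hugging Face stack; the base model, dataset, and hyperparameters are placeholders rather than anyone's recommended setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"          # ~66M parameters vs. billions
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Any small in-domain labeled set works; sst2 is just a public stand-in here.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="specialist-model", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=encoded["validation"],
)
trainer.train()
print(trainer.evaluate())
```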

 

Lucas Swisher  11:50

Great, thank you. Alex, you've talked a lot about how data bottlenecks can get in the way of enterprises actually being able to harness the power of these AI models. What, in general, do you think are the big barriers, that missing last mile, to actually getting AI into production? Because we talk a lot about AI in production, and we don't see a lot of it quite yet. There are a lot of amazing demos out there, but how do you actually get those into production and in front of users? What are the biggest bottlenecks?

 

Alex Ratner  12:20

Yeah, it's a great question. And I want to start by saying that, despite sounding like the crazy data guy insisting data is the bottleneck, there are others, and they're actually well represented by this panel: explainability and interpretability; the right UI and interface, because in the generative paradigm we're inventing new ones by the day beyond just running model predictions; and trust and safety. So there are lots of practical bottlenecks, especially in larger regulated industries, for example government, healthcare, and so on. But data, across all of these verticals, is definitely one that often stops you getting from the proverbial demo to the 90, 95, 99 percent you need for production. One of the ways we've been talking about this: my co-founder Chris Ré is a professor at Stanford and helped found the Stanford Center for Research on Foundation Models. Part of the reason we say "foundation model" is, one, they're not just language; it's any data, graph structures and more. Two, the focus is shifting to the kinds of use cases that get built on top of them. And three, these are usually first-mile tools. They get you to the 80 percent for use cases, internal tasks that are not easy to pass off, whether it's extraction from documents in a bank or something from satellite images at a government agency, all these bespoke use cases. These things are an order-of-magnitude improvement in that first mile. But the last mile is still what blocks production: most enterprises can't ship a model that only gets 80 percent accuracy, at least outside of a copilot-type paradigm, which is still new. So how do you get from 80 to 95 or 99? It's all about teaching the model with data, whether it's fine-tuning, instruction tuning, retrieval, or prompting; all of this converges on the labeled, curated data you use to teach the model about your task. And that's where you get blocked, because the data science teams are usually not the teams that can do this. The common story is a data science team begging some line-of-business partner at, say, an insurance company: can you please label 10,000 Word documents? And that's where you see quarters' worth of blockage. That's our mission, to speed that up. But in general, I just think it's an underappreciated bottleneck.
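
A minimal sketch of the programmatic labeling workflow being described, using the open-source snorkel library; the heuristics and data are invented for illustration, and this is not Snorkel's commercial platform.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_RELEVANT, RELEVANT = -1, 0, 1

# Hypothetical heuristics a line-of-business expert might encode instead of
# hand-labeling 10,000 documents one by one.
@labeling_function()
def lf_mentions_claim(x):
    return RELEVANT if "claim" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_boilerplate_footer(x):
    return NOT_RELEVANT if "unsubscribe" in x.text.lower() else ABSTAIN

df_train = pd.DataFrame({"text": [
    "Customer filed a claim for water damage on 2023-04-02.",
    "To stop receiving these emails, click unsubscribe.",
    "The claim was approved after review by the adjuster.",
    "View this message in your browser or unsubscribe below.",
]})

# Apply the labeling functions, then let a label model turn their noisy votes
# into probabilistic training labels for a downstream classifier.
applier = PandasLFApplier([lf_mentions_claim, lf_boilerplate_footer])
L_train = applier.apply(df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100, seed=42)
print(label_model.predict_proba(L_train))
```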

 

Lucas Swisher  14:51

That makes sense. One of the other obvious bottlenecks is trust and safety. Think about the absolute explosion of content over the last couple of years as we've become able to generate content at scale. How do you think about trust and safety in this new era of content generation, not just text generation but also image generation, video generation, all the different modalities?

 

Ryan Treichler  15:13

It's been really interesting to watch, and to talk to companies about, how difficult trust and safety has become. It used to be that someone was going to say something bad and you had to detect that; now it's that someone is going to prompt a model to do something bad. Historically we looked for things like whether an adult is trying to groom a child, and there were patterns we could look for and detect. Now what we're seeing is people using generative AI to try to create child grooming imagery, or really, really toxic material, by manipulating things that are out there and doing it in a way that circumvents the established protocols. So in the trust and safety space, generative AI has made it a lot easier for people to circumvent the tools that already exist. Fortunately, generative AI is also one of the tools we can use to come back and combat it. We use it to create synthetic data sets and to build larger models that make it easier to detect these things. So we're seeing it be both the source of the problem and the source of the thing we use to combat the problem, because we can use generative AI to create the training examples we need to detect this material. And we do expect an explosion of it. My customers are consistently coming to me concerned about a rapid growth and propagation of really harmful content created by generative AI. We haven't seen it yet, but everybody is bracing for the concern that this is going to make spear phishing a lot more effective, and that it's going to make really toxic, harmful things, where there are legitimate harms to people, more accessible, because it becomes easier to create the manifestos that recruit people into radicalization. There's a lot of potentially really harmful material that can be out there, and we've seen consistent requests from people looking for ways to identify it. And beyond the top-level material, there's a whole other, deeper level of looking at bias in the data these models were trained on. As you get into some of these foundation models, you can see where they've put guardrails to protect against certain things, but there is inherent bias in a lot of the data that was used to train them, and to me that's almost scarier. It's one thing to block "create a bad picture" or "write a poem that offends someone." It's a whole other thing to detect that if you ask the model a question involving gender, say about a nurse or a doctor, the doctor is always he and the nurse is always she. Those inherent biases from the training data come out, and what's scary is how they'll manifest when you're not looking for them. So there's also work going on around the underlying training data, to look for the biases and remove them further upstream.
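
A toy version of the occupation-and-pronoun probe Ryan alludes to, using the open-source Hugging Face fill-mask pipeline on a small public model; the templates are illustrative and this is not Spectrum Labs' tooling.

```python
from transformers import pipeline

# Probe occupation/gender association in a masked language model.
# bert-base-uncased is used purely as a small, public example model.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor said [MASK] would be late for the appointment.",
    "The nurse said [MASK] would be late for the appointment.",
]

for t in templates:
    preds = unmasker(t, top_k=5)
    # Keep only pronoun completions and their probabilities.
    pronouns = {p["token_str"].strip(): round(p["score"], 3)
                for p in preds if p["token_str"].strip() in {"he", "she", "they"}}
    print(t, "->", pronouns)
```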

 

Lucas Swisher  18:37

Right, I think we just got a pitch for Ryan's startup. But Krishna, how do you think about that problem, and how might we be able to solve some of these bias issues?

 

Krishna Gade  18:46

Yeah. I think there are two things here. One is the productization of trustworthy generative AI, and so many of these apps require companies to invest in infrastructure to get there. When we started four years ago, we were evangelizing explainability and observability for traditional machine learning models, and companies had to build out this ML infrastructure: versioning of models, governance around models, looking into how models are performing. Four years on, we still have a lot of companies who have not even started on this journey, and now the whole market has moved on again. You have to build all this infrastructure to productize. Without taking care of these issues, observability, setting perimeter controls around how the models are used, looking for bias and safety issues, it's going to be very difficult, at least in the segments we operate in: financial services, healthcare, HR. Regulations will probably come to bear, and you will need to have trustworthy apps. So there's no silver bullet; it just requires the work. I was recently on a panel with Peter Norvig, who was saying that in order to build a working, trustworthy ML application you just have to put in the work. You have to make sure the model works well on your test data before you launch it, you have to understand how the model works, you have to monitor it continuously, and you have to have alerts when things go wrong. You just have to do all these things, and there's no free lunch to get there.

 

Alex Ratner  20:27

Just to tack on, first of all, plus one to Ryan's point about the danger here. Forget the sci-fi dangers like Skynet taking over; the pedestrian dangers, mass-scale disinformation and things like that, are really exploding, and it's so much more real now. I was on a panel about a year and a half ago at an internal Google AI conference, and I kept getting questions from folks there about what happens when the web is filled with auto-generated content. I thought, what are these people asking about? This seems very sci-fi; give me questions about loss functions and things like that. But now all this stuff is real. And then I think the point, also, is that it does come down to the data. I've got to take the bait, because I've got to stick to our shtick, but it is something that's critical. One thing I like to point out: people talk about biases, they talk about hallucinations, and the same is true of basic performance. There's a paper that came out a night or two ago at one of the NLP conferences that looks at factuality: if you ask ChatGPT something and then cross-check it on Wikipedia, looking at all the atomic statements it makes and how many of them are right, GPT-4 is around 40 percent. You can help with retrieval, and there are all kinds of things you can do there, but this is real; the biases are real. And actually, I don't like the term hallucination. It sounds mystical or emergent or surprising, and it's not at all surprising. These models were never trained to do some specific task for us, or to answer some specific question in a factual and bias-free way. They were trained to produce statistically plausible completions of prompts based on a data set that we somewhat randomly cobbled together. So it's not at all surprising that they don't work bias-free and error-free in specific settings. Surprise, surprise: you need to actually put some intentionality into the data at every stage that goes into the models. If you just stack models on top of uncurated data, you get what's in the data; it's been the same for decades. So that's a plus one to those points, but also, there's no magic around it. It's a missing step, and the thing you have to do to get serious about deployment is get the data right.
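
A toy illustration of the atomic-statement factuality check Alex mentions; real systems pair claim extraction with retrieval over Wikipedia and an entailment model, so the reference text, sentence splitting, and overlap heuristic below are simplified stand-ins.

```python
import re

reference = (
    "Grace Hopper was an American computer scientist. "
    "She developed the first compiler and popularized the term 'debugging'."
)

generated = (
    "Grace Hopper was an American computer scientist. "
    "She invented the Python programming language. "
    "She developed the first compiler."
)

def atomic_statements(text: str) -> list[str]:
    # Naive sentence splitting stands in for real claim extraction.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def supported(claim: str, reference: str) -> bool:
    # Crude lexical overlap stands in for retrieval plus entailment.
    claim_terms = set(re.findall(r"\w+", claim.lower())) - {"she", "was", "the", "a"}
    ref_terms = set(re.findall(r"\w+", reference.lower()))
    return len(claim_terms & ref_terms) / max(len(claim_terms), 1) > 0.8

claims = atomic_statements(generated)
score = sum(supported(c, reference) for c in claims) / len(claims)
print(f"factual precision ≈ {score:.0%}")  # flags the fabricated Python claim
```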

 

Lucas Swisher  23:01

Changing gears just a little bit: you're all part of companies that are growing extraordinarily fast, both in headcount and in metrics. What sort of organizational challenges have you experienced, and how do you keep your teams focused? How do you attract the best talent and retain the best talent? And, maybe most importantly in this environment, how do you keep innovating when things are moving so quickly?

 

Shane Orlick  23:23

It's exciting to hear everybody talking about this final-mile problem. It's like autonomous vehicles: the last 10 percent is 90 percent of the work. We've been doing generative AI for the last two and a half years, which doesn't seem like a long time, but in reality it was two years before ChatGPT. We rolled out generative AI for content creation, and it was this big wow moment for a lot of people, and that played for a year, and then we got bored of it, or our customers got bored of it. So we started really focusing on: what do our customers want? What do they need? How do we actually make their lives better? And it turns out it's not just content creation; it's distributing it through the right mechanism, analyzing it, improving it. It's this whole chain around the problem. By solving that problem, we're able to attract the most talented people, because we're not just solving the high-level generic thing of "hey, we're putting an API in our product and now we're an AI company." We're actually solving real problems for real customers and making their lives better. And at the end of the day, if you're an AI company right now, we're all fighting for talent. It's not ideas; there are a million ideas in this room. It's not money; there's a lot of that if you have the right idea. It's talent: the people who are actually going to go execute that final mile, that final problem, and care passionately about the problem they're solving. So by focusing on our customers, we've been able to get good people, and once you have good people who listen, the rest sort of unfolds. But it's exciting to see people go beyond "we put an API into our product and the stock went up 20 percent." Okay, who cares? How are people actually going to improve the way they work or the way they live because of this?

 

Alex Ratner  25:06

Just to plus-one that: even evaluation, of how accurately we're doing with these new generative apps and how real the business value is, is a legitimate technical challenge. I think that says a lot about the state of where we are; we're still struggling to understand how to measure that, which you'd want figured out before you really get serious about deployment, and we're in the phase of moving to that. On growth, I'll give a quick answer. I think there are the standard challenges of just growing quickly, which are about organizational maturation: moving from founder-led sales, which is very helpful in a fast-moving space, to something with a rigorous process around assessing both technical and business value and alignment, and building up the product and engineering organizations. In terms of the talent wars, there's a lot of excitement about "I'll go do a three-person startup and hack together a demo that gets to that proverbial 80 percent in 24 hours, or 20 minutes." Which is awesome. But then how do you keep people like that excited working on real systems that take a bit more than 30 minutes of hacking, and that aren't all glory on Twitter? That's one thing we think about a lot as a company that sells to enterprise customers; we work with top US banks and large agencies, large insurers and healthcare organizations. How do you bridge that, both for our internal teams who want to be playing with the cool stuff, and frankly for our customers' data scientists, because even though they sit inside these big organizations, they're watching the same Twitter threads? How do you keep that balance between innovation in a very fast-moving space and, I was just on a call with our sales lead, call it the boring stuff: enterprise readiness and performance and stability and the last mile and all of that? So I think the answer is just that constant balance between doing the new stuff and staying grounded in the stuff that actually matters. Eighty percent, maybe 100 percent, of our customer base in the Fortune 1000 is not using foundation models in production, and probably isn't going to for critical applications for at least a year or two, if not longer in some regulated industries. How do you keep serving those basic, pragmatic enterprise goals while also being exciting enough, internally and externally, with all the new stuff? It's a balance, and it's a challenge for AI companies.

 

Krishna Gade  28:08

It's striking when you go and talk to customers. On one side we're looking into how to evaluate and audit LLMs in our labs, and on the other side a banking customer has a simple NLP-based chatbot that uses a decision-tree-based routing mechanism, and they're asking, can you monitor the feedback we get from our users? They don't even have basic monitoring to know how often the chatbot is actually working. So there is a vast difference in AI maturity across the board, and especially for AI startups there's this tension between innovation and the bread-and-butter business: you have to do a lot of basic things right.

 

28:55

They'll have stuff that offset one of the things that we kind of run into is, you know, it is really easy, particularly with cell phones to get an experiment or to do something in five or 10 minutes, but turning that oftentimes we start looking at the results is like what on the surface, this looks really good. And then we start probing it's like, Nope, that's wrong. That's wrong. That's wrong. That's wrong. And then you start seeing, like, there's there's all kinds of issues that need to be addressed. We found weird stuff. We go across languages, like that. And so the ability to take that initial experiment and do all the work that turns it into something that's valuable, can sometimes be difficult because people get sidetracked by this and showed that it worked, but now it's gonna take a month to get it into production. That does not always happen. I think we also sometimes struggle with the difference between shiny object and paradigm shift where you're looking and saying like, Wow, is this a major change that's actually going to be really valuable that we need to invest time in? Or is this just something that's novel that's not really gonna move the needle and when you have a lot of creative, talented people that like to go after stuff, they will run away from shiny objects stuff. That may not be a paradigm shift and to making sure that people stay focused on that while giving them the creativity to actually explore so that you don't miss any of the big things is another important thing. I think that's a

 

Lucas Swisher  30:25

That's a really important point, and I'm curious for the group: when things are moving so quickly and everything affects everyone, how do you prioritize from a product perspective, your own roadmap and your own vision, versus incorporating what else is happening out in the ecosystem? How do those things interact, and how do you prioritize?

 

Shane Orlick  30:49

Yeah. For us, we want to move quickly but deliberately. It doesn't have to be perfect, it just has to be fast, and then we look and see where the adoption comes from and our customers tell us where to invest more. But we've shifted into a more structured, predictable environment, because before we would just run a bunch of experiments; it's so easy and so quick, you can throw anything out there. The problem is you don't know exactly which one worked. So now we've also invested heavily in a growth team. That's their whole function, and they sit cross-functionally across product, engineering, sales, marketing, the whole spectrum, looking at all the metrics. We actually have a spreadsheet now with over 20 experiments that we're running, and we make people commit to what problem each experiment is trying to solve. Is it reducing churn, increasing expansion, increasing engagement? What do they think the point of this experiment really is? Then we make them sign up to a numerical projection of the impact, and we prioritize from there. Success masks a lot of these things, and it's really easy in this environment, if you're an AI company growing really quickly, getting a lot of customers, a lot of money, a lot of traction, to start to believe your own hype. For us, we got a little bit ahead of our skis, and we had to slow down and go back to some of the fundamental basics, but it's had a big impact on the business.

 

Alex Ratner  32:16

I think that point about experimentation, and anchoring those experiments on real metrics that tie to some end outcome or business value, is a really great one. For us, we built a platform for data-centric AI and data workflows, for doing labeling and other data operations programmatically as a first-class process rather than as second-class manual work treated as janitorial. That's a very broad surface area, because for any kind of model you might want to build, including generative applications, there's some aspect of data labeling or data operations that goes into getting it right. So for us it's really about how we prioritize which use cases we build. We always have the horizontal layer, but nothing in machine learning is completely use-case-agnostic, and there's a long distance from the horizontal layer all the way to use-case-specific collateral and the go-to-market teams needed on that side of things. So we try to be very experimental about it: the whole "see if someone will pay for it, then learn from that" approach. It's typically go-to-market and engineering together, and it's not a one-time thing. We'll put some ideas out there, either through the academic labs we're affiliated with or, like the other Twitter hackers out there, as a concept, and then we'll pick two or three design partners, as we call them, for any new use case, and go through the painful process of making sure there is a use case at all. We only work with people who will actually pay for it, and we learn from that and validate before we ever commit fully on the product side. It's a bit slower and more painful sometimes, but it gives us better actual data about where we should go, given such a broad surface area, for us but also for a lot of companies these days, given how potentially broad the surface area this technology enables is.

 

34:21

Yeah, there's a marketing technique called a painted door test, which is exactly that: you basically stand up a fake version of the product and build the web pages for it. It comes from retail, where you let somebody get all the way through the purchase funnel and click purchase, and then it's "oops, it's out of stock," to validate that somebody is actually going to click buy. So we use a lot of the same techniques to validate that people are actually going to request information, that there's a willingness to purchase, before we actually build.
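
A minimal sketch of a painted-door test in code, assuming a hypothetical web app: the advertised feature does not exist yet, and clicks on its call-to-action are simply logged so demand can be measured before anything is built.

```python
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/export-to-crm", methods=["POST"])
def export_to_crm():
    # Record who clicked and when; in practice this goes to an analytics tool.
    app.logger.info(
        "painted-door click: user=%s at %s",
        request.headers.get("X-User-Id", "anonymous"),
        datetime.now(timezone.utc).isoformat(),
    )
    # The "door" is painted on: politely tell the user the feature isn't live.
    return jsonify(message="Thanks for your interest! This feature is coming soon."), 202

if __name__ == "__main__":
    app.run(port=5000)
```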

 

Alex Ratner  34:50

I love that example. And the other thing is that you can design your product to facilitate this. For a lot of what we build, there's an SDK layer, a notebook and library layer, and then there's a UI layer, so we can do the equivalent of that marketing test: the more hands-on or advanced users try something in the SDK first, and then it graduates into the UI based on the data. You can build your product to facilitate that kind of experimentation with users. One learning, too, came from when we tried to build our own.

 

Lucas Swisher  35:16

This has been fantastic. I think we have time for one, maybe two questions, if anyone has any.

 

Audience Member  35:27

Hi, my name is Christine Vianney, founder of [inaudible]. I have a question about this split in general: building general models for the mass market versus building something for a vertical application for existing incumbent players, where collecting the data is the real advantage. As a founder, you have very different go-to-market motions on each side. On one side there are companies like Jasper who can demonstrate the value to the customer in five minutes: you see the magic, you pay 50 bucks per month. On the other end of the spectrum you have, sorry to say, the dinosaurs: large vertical incumbents who have a lot of data inside, but whose sales cycles take a year, who don't integrate very well, and where you spend a lot of time, although they have long experience and understand what works and what doesn't. Both sides of the spectrum have advantages and disadvantages. Which side are you more bullish on, where will we see more unicorns created, or are the two sides equal?

 

Shane Orlick  37:09

I'll take a stab at this first. Before Jasper I was historically an enterprise person, so the organizations I was part of ran mostly an enterprise motion, with that long cycle you're talking about. With Jasper we have this product-led growth engine that is actually now feeding our enterprise motion. So my short answer is both, if you can create that flywheel effect, even if it's not your full product, just a way to do lead generation and get people in the door. I love what you said about the trial; that's literally what our self-serve product does. The product for 50 bucks on the website (it's actually $29 up to $99): if you want to just go use that, that's the single-player motion, and maybe it grows to four or five seats. Then once it gets to ten seats, it moves to the departmental and enterprise motion, into a sales organization with a CS team and a dedicated marketing team that helps them. If you can feed that, it works a lot better. At WalkMe, which was my last company, we had 90 BDRs just making cold calls, trying to get meetings for the sales team to start those engagements, and it's really expensive and really slow. I think by the time you get through that motion, this whole AI thing will have moved on to a whole other point beyond the product you originally built. So I think you need to have both.

 

38:33

I agree with that. But I think if you would pick the last little bit of research I did there were five or six unicorns in this face, Trump was one of them and they all have a big become. I think if you're going to do this with our product, like rather than just trying to do enterprise, I think you're really selling yourself short and you're gonna have a hard time capturing market share, because so many people I've met so he was introduced to this to go try it is it's really good. It's really difficult to just put people through an enterprise motion if they can go to your competitors and try for you to try for

 

39:12

it's too slow. You might miss it. And by the time you've built that thing, and he's got the you know, some tweet came out that somebody built what you built and it's out of the box and they can just download it. And again, great. We're one of those unicorns like we're lucky. But we're not that doesn't mean a today, right? How are we going to be relevant in two years, three years, four years, five years, like that's what we're really trying to focus on and so that's where that ELT motion your customers just listen to that will tell you what to build. But you need to build something that's like unique and important enough to those customers that will pay hundreds of 1000s or millions of dollars over time. And that for you probably need a heavier interesting. Thank you.

 

Lucas Swisher  39:54

That's all the time we have for questions, but I appreciate everyone coming out, and thank you to the panelists.
