As demand for generative AI services increases, Vellum raises $5 million

Posted By Goprogs Blog

This morning, Vellum said it closed a $5 million seed round.

The company declined to say who its lead investor was for the round, other than that it was a multi-stage firm, but told TechCrunch that Rebel Fund, Eastlink Capital, Pioneer Fund, Y Combinator and several angels participated in the round.

The startup first caught TechCrunch’s attention during Y Combinator’s final demo day (Winter 2023) for its focus on helping companies improve their generative AI.

Given the number of generative AI models, how quickly they are advancing, and how many business categories seem poised to take advantage of large language models (LLMs), we liked its focus.

The market also likes what the startup is building, according to metrics Vellum shared with TechCrunch. According to Akash Sharma, CEO and co-founder of Vellum, the startup has 40 paying customers today, with revenue growing at around 25% to 30% per month.

That’s impressive for a company that was born in January of this year. Normally, in a short funding update of this kind, I would spend a little time detailing the company and its product, focus on growth, and move on. However, while we’re discussing something that’s still nascent, let’s take a moment to talk about prompt engineering more generally.

Building Vellum
Sharma told me that he and his co-founders (Noa Flaherty and Sidd Seethepalli) were employees of Dover, another 2019-era Y Combinator company, and worked with GPT-3 in 2020 when its beta was released.

At Dover, they built generative AI apps for writing recruiting emails, job descriptions, and the like.

They noticed they were spending too much time on their prompts and couldn’t edit them in production or measure their quality.

They also needed to build tools for fine-tuning and semantic search, and the manual labor added up, Sharma said.
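The semantic search mentioned here usually means embedding documents and queries as vectors and ranking by similarity. As a rough illustration only (not Vellum’s or Dover’s actual implementation), here is a toy version using bag-of-words cosine similarity in place of learned embeddings:

```python
import math
from collections import Counter

# Toy semantic search over context snippets. Real systems use learned
# embeddings from a model; word-count vectors stand in for them here.

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

docs = [
    "refund policy for enterprise customers",
    "how to reset your password",
    "quarterly hiring plan",
]
best = search("customer refund rules", docs)
print(best)
```

A production setup would swap `vectorize` for an embedding-model call and `max` for a vector-database lookup, but the ranking idea is the same.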

This meant the team spent engineering time on internal tools instead of building for the end user.

With that experience, and his two co-founders’ machine learning operations (MLOps) backgrounds, they realized when ChatGPT was released last year that market demand for tools to improve generative AI prompting “was going to grow exponentially.”

Hence Vellum. Seeing a market open up new opportunities for tool building is nothing new, but modern LLMs may not only change the AI market itself; they may make it bigger.

Sharma told me that until the recently released LLMs, “it was never possible to use natural language [prompts] to get results from an AI model.”

The move to accept natural language input “makes the [AI] market much bigger because you can have a product manager or a software engineer, literally anyone, be a prompt engineer.”

More power in more hands means more demands on tools.

On that topic, Vellum offers a way for AI prompters to compare model outputs side by side, the ability to search company-specific data to add context to specific prompts, and other tools, like testing and version control, that companies may want to ensure their prompts spit out the right stuff.
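To make the side-by-side comparison idea concrete, here is a minimal sketch. The two “models” are stand-in functions (Vellum’s real product is a hosted platform calling actual LLM providers); the point is only the shape of the comparison:

```python
# Sketch of side-by-side prompt comparison. The model functions below are
# stubs; in practice each would call a different LLM or prompt variant.

def model_a(prompt: str) -> str:
    # Stand-in for one model/prompt template.
    return prompt.upper()

def model_b(prompt: str) -> str:
    # Stand-in for a competing model/prompt variant.
    return prompt[::-1]

def compare_side_by_side(prompts, models):
    """Run every prompt through every model; one row per prompt."""
    rows = []
    for p in prompts:
        rows.append({"prompt": p, **{name: fn(p) for name, fn in models.items()}})
    return rows

rows = compare_side_by_side(
    ["hello world", "ship it"],
    {"model_a": model_a, "model_b": model_b},
)
for row in rows:
    print(row)
```

Laying outputs out in rows like this is what makes regressions and quality differences between prompt versions easy to eyeball.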

But how hard can it be to prompt an LLM? Sharma said, “It’s easy to spin up an LLM-powered prototype and run it. When companies take something [like this] into production, they realize there are a lot of edge cases that tend to give weird results.”

In short, if companies want their LLM-powered features to be consistently good, they will need to do more work than simply skinning GPT outputs generated from user queries.

Still, that’s a bit generic. How do companies using nuanced prompts in their applications handle the prompt engineering needed to ensure their outputs are fine-tuned?

Source: TechCrunch


