
Two claims are being made here, one boring and one lurid.

The boring claim is that the company inflated its sales through a round-tripping scheme: https://www.bloomberg.com/news/articles/2025-05-30/builder-a... (https://archive.ph/1oyOw). That's consistent with other recent reporting (e.g. https://news.ycombinator.com/item?id=44080640).

The lurid claim is that the company's AI product was actually "Indians pretending to be bots". From skimming the OP and https://timesofindia.indiatimes.com/technology/tech-news/how..., the only citation seems to be this self-promotional LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7334521... (https://web.archive.org/web/20250602211336/https://www.linke...).

Does anybody know of other evidence? If not, then it looks bogus, a case of "il faudrait l'inventer" ("it would have to be invented") which got traction by piggybacking on an old-fashioned fraud story.

To sum up: the substantiated claim is boring and the lurid claim is unsubstantiated. When have we ever seen that before? And why did I waste half an hour on this?

(Thanks to rafram and sva_ for the links in https://news.ycombinator.com/item?id=44172409 and https://news.ycombinator.com/item?id=44175373.)



This ad for Builder directly claims "Natasha" - the secret sauce that Builder hoped to sell to Microsoft - is an "AI":

https://www.youtube.com/watch?v=D36ZmJRYGK8

Engineers in the dev office say that's false and "Natasha" was a running joke in the office:

https://techfundingnews.com/fake-it-till-you-unicorn-builder...


There's no evidence for that. It's very easy to write in a blog post "they told me" or "an engineer said".

Why didn't they contact the Head of AI, Craig Saunders (ex-Director of Amazon AI), and ask him directly? That was never done, which raises serious doubts about the credibility of the person who wrote it.

We need to stick to the facts, especially now that pseudo-journalists are flooding the web with fake news.

Let's keep this site reliable, please.

I read their site and blog, and they have a lot of screenshots of their internal apps.

Here's the dashboard where you choose an app: https://www.builder.ai/images/Choose.jpg

And here's the project progress dashboard showing how long it takes to build. In this case, it's 7 months. Clearly, there's no GenAI involved if it takes that long:

https://www.builder.ai/images/builder-studio-project-progres...

The menu also shows things like "Releases" and "Meet the squad" (the devs, I'm assuming). It can't be fake! One of them shows templates of well-known sites. Based on what I read, you choose a template, Natasha or something else handles the assembly, which I assume is just a fancy way of saying it checks out a repo and installs dependencies. Then the Indian programmers do the rest. This is clearly explained on their website.

Take a look at their blog. There's plenty of information about their apps, which were reviewed by Microsoft before they invested 250 million.

Verdict: FAKE NEWS


Literally a news story where they spoke to the engineers.

Verdict: stop shilling.


The CEO and CFO committed fraud. That has been proven and there is no doubt about it.

But you are not going after them. You are accusing the Head of AI of lying. This is someone who spent seven years at Amazon, led the AI department, and is well known and respected in the community. You are also ignoring all the information posted on the company's blog, site, LinkedIn, and social media.

Do you really expect me, or anyone in the AI/ML space, to believe a blogger who doesn't know the difference between AI and GenAI? No chance. We do not take it lightly when a blogger tries to damage the reputation of someone who has earned the respect of the industry just for clicks or ad revenue.

I'm more interested in the truth than in ruining people's careers.


There are personal testimonials in the indiandevelopers subreddit from quite a while ago, if those are to be believed.


The news about BuilderAI using 700 devs instead of AI is false. Here's why.

I've seen a lot of posts coming out of India claiming "we were the AI". So I looked into it to see if Builder AI was lying, or if this was just a case of unpaid developers from India spreading rumours after the company went bust.

Here's what some of the devs are saying:

> "We were the AI. They hired 700 of us to build the apps"

Sounds shocking, but it doesn't hold up.

The problem is, BuilderAI never said development was done using AI. Quite the opposite. Their own website explains that a virtual assistant called "Natasha" assigns a human developer to your project. That developer then customises the code. They even use facial recognition to verify it's the same person doing the work.

> "Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked."

Source: https://www.builder.ai/how-it-works

I also checked the Wayback Machine. No changes were made to that site after the scandal. Which means: yes, those 700 developers were probably building apps, but no, they weren't "the AI". Because the company never claimed the apps were built by AI to begin with.

Verdict: FAKE NEWS


So "AI" in BuilderAI actually stands for "An Indian"?


Well, it's an India-based company. If you check LinkedIn, most of the employees are based in India.

The issue here is that many people think AI only means LLMs, Transformers, or GenAI. But AI has been around for decades and includes machine learning, deep learning, and neural networks.

So anyone using ML is free to register a .ai domain. There’s nothing wrong with that.

The problem would be if you told customers that your virtual assistant, in this case "Natasha", was creating the code instead of humans.

But that's not what happened here. The company went broke because it was reporting false sales figures.


I couldn't find any reference on the BuilderAI website claiming they use GenAI to build software. So the second claim lacks evidence.

Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.


The following is from a link given by nikcub 12 days ago (https://news.ycombinator.com/item?id=44080640):

Engineer.ai says its “human-assisted AI” allows anyone to create a mobile app by clicking through a menu on its website. Users can then choose existing apps similar to their idea, such as Uber’s or Facebook’s. Then Engineer.ai creates the app largely automatically, it says, making the process cheaper and quicker than conventional app development.

“We’ve built software and an AI called Natasha that allows anyone to build custom software like ordering pizza,” Engineer.ai founder Sachin Dev Duggal said in an onstage interview in India last year. Since much of the code underpinning popular apps is similar, the company’s “human-assisted AI” can help assemble new ones automatically, he said. Roughly 82% of an app the company had recently developed “was built autonomously, in the first hour” by Engineer.ai’s technology, Mr. Duggal said at the time.

Documents reviewed by The Wall Street Journal and several people familiar with the company’s operations, including current and former staff, suggest Engineer.ai doesn’t use AI to assemble code for apps as it claims. They indicated that the company relies on human engineers in India and elsewhere to do most of that work, and that its AI claims are inflated even in light of the fake-it-till-you-make-it mentality common among tech startups.

Original link (by nikcub):

https://www.wsj.com/articles/ai-startup-boom-raises-question...

Archive:

https://archive.ph/R3nMZ

Note the article is from 2019. "Engineer.ai" is the same company as "Builder.ai".

To summarise, my reading of the article is that the founder of Builder.ai (at the time "Engineer.ai") promoted the company's technology as mostly AI, assisted by a few humans; and that the WSJ saw documents suggesting otherwise.

dang, why do you say the claim is "lurid"? Is it because of the racist undertones of "Indians, not AI"? That's fair: there's severe racism against Indian coders in the West. But scams and fraud absolutely happen, and it's inevitable to be disgusted when they're revealed. There has to be a more balanced stance than dismissing all fraud claims as "lurid".


I called it lurid because it's sensational and yes, because of the implicit slur.

(Not the most precise use of the word lurid because it lacks the ghastly/uncanny quality - https://www.etymonline.com/search?q=lurid, but I couldn't think of a better one.)


Thanks, I thought that might be it (re "lurid"). How about "vulgar"? Ah, too archaic maybe?


Boring fraud vs. wild AI hype

- Proven: Builder.ai collapsed after fabricating revenue.

- Unsubstantiated: The claim that Indian coders were disguised as AI is mostly hearsay, not backed by documents or insiders [1].

- Marketing vs reality: They marketed features as AI-assisted, not AI-generated code. Two completely different things.

- Bottom line: The real scandal is financial fraud, not a fake‑AI front.

---

[1] Source: https://www.linkedin.com/pulse/setting-record-straight-real-...

Yash Mittal, ex-Associate Director of Product @ Builder.ai

He describes a sophisticated AI-automated development pipeline, including:

- Requirement gathering via conversational AI

- Auto-generated features, user stories and test cases

- Graph neural nets building prototypes

- AI-based design and code generation

- Quality checks handled by humans (openly, not as deception)

- Human developers who never pretended to be AI

---

Please, let's keep the information here reliable. We don't want this forum turning into another Reddit. Thanks to everyone who took the time to investigate and share evidence or real experiences.


Here we go again. We're amplifying accusations from pseudo blogs with little credibility, and spreading rumours in a forum where most of us are AI/ML engineers, researchers, founders, and university professors. We should know better.

I read their site and blog, and they have a lot of screenshots of their internal apps. It can't be fake! One of them shows templates of well-known sites. Based on what I read, you choose a template, Natasha or something else handles the assembly, which I assume is just a fancy way of saying it checks out a repo and installs dependencies. Then the Indian programmers do the rest. This is clearly explained on their website.

Guys, take a look at their blog or website. There's plenty of information about their apps, which were reviewed by Microsoft before they invested 250 million.

Here's the dashboard where you choose an app: https://www.builder.ai/images/Choose.jpg

And here's the project progress dashboard showing how long it takes to build. In this case, it's 7 months. Clearly, there’s no GenAI involved if it takes that long:

https://www.builder.ai/images/builder-studio-project-progres...

Indian programmers and mathematicians are incredibly talented. In fact, an Indian programmer invented the first Transformer, which led to the rise of GenAI. The world chess champion is also from India. So let's stop mocking them. Companies like Google, Microsoft, Apple, Infosys and BuilderAI employ thousands of Indian programmers who are in the top one percent.

The founder of BuilderAI is also from India and was named Entrepreneur of the Year by Ernst & Young in 2024. He hired directors from Microsoft and Amazon. The Head of AI was a former AI Director at Amazon.

You need to stick to the facts. Their website looks legitimate and makes no mention of GenAI.

Verdict: FAKE NEWS

(Note: Indian devs are amazing)


To clarify, are you saying that the quotes the WSJ published, attributed to the founder, are "fake news"?


The founder was named UK Entrepreneur of the Year in 2024 by Ernst & Young, a company that also has a strong focus on AI research (some of my students work there). They do proper research before giving out an award.

Going back to your question:

You're quoting articles from 2019. Do you realise how quickly technology and startups evolve in a single year, let alone 6?

We can't assume nothing has changed since then, especially when there's plenty of evidence that the company grew 10x after the 2019 article was published.

Check LinkedIn (if you have premium): the company went from 200 employees to 1,000 and brought in directors from Microsoft and Amazon, people who are well known in the AI/ML space for their contributions to AI and virtual assistant development.

The biggest mistake the company made, without question, was the lack of transparency around sales figures. That's on the CEO and CFO, who committed fraud.

But now it feels like we're shifting the blame toward the AI engineers, the people who worked hard to build the internal tools the company was promoting. From what I've seen, those engineers built some great apps. If the CFO and the accountants were cooking the books, that's not their fault.

We should support the engineers and hold the founders and accountants responsible, instead of letting a blogger spread misinformation and claim that former directors from Amazon and Microsoft knew nothing about AI and faked the tech, which is clearly fake news.


>> You're quoting articles from 2019. Do you realise how quickly technology and startups evolve in a single year, let alone 6?

So that I don't misunderstand what you mean, do you mean that Builder.ai evolved into using less AI than what they were using in 2019?

That is, do you mean that in 2019 roughly 82% of an app the company had recently developed “was built autonomously, in the first hour” by Engineer.ai’s technology, according to the quote above, and during the next 6 years the company evolved so that less of their apps' code was built by AI?


Why would you come to that conclusion? It's really not that hard to understand.

They already had a library of templates, modules, components, and existing code, which they were likely reusing.

According to their website, their virtual assistant interacts with customers to understand requirements, recommends suitable templates, and assembles a basic version of the app automatically. It then assigns developers to that project, as shown on their site. In one of their blog posts, they mentioned using machine learning to generate dependency graphs, which were used to map out what needed to be built and estimate timelines.

Since 2019, they've expanded their AI and ML team and hired the former Director of AI from Amazon.

The issue here is that you're quoting articles published before GenAI or ChatGPT even existed. Back then, AI mostly referred to machine learning, it was a different landscape entirely.

I'm only interested in understanding why they lied about their finances. Because the tech they had was actually quite impressive.

We'll never know if they were reusing 40, 60, or 80 percent of the code. What we do know is that developers were spending 7 months (based on a screenshot from their website) writing code. So GenAI clearly wasn't used to generate the code. But to be fair, they never claimed it was.


So you mean that Builder.ai is using more AI now than they used in 2019? In other words, more than "roughly 82%" of apps the company now develops is built "autonomously, in the first hour by Engineer.ai’s technology"?

Note that generative AI and LLMs like GPT, ELMo and BERT existed earlier than 2019 and were the subject of much research (see "BERTology").


I'm not sure what you're referring to. I don't think you understand the difference between automation, traditional AI, and generative AI.

Back in 2019, when someone said part of an app was built "autonomously", it usually meant they reused components and generated the glue logic, configurations, or some custom code around it. In BuilderAI's case, they said they were using AI/ML to create dependency graphs. Just read their blog, it's all there!

The Transformer paper came out in 2017, and the first time I heard of GPT was in 2019. At that point, only a few companies like Google and OpenAI were working on LLMs. If you're expecting BuilderAI, a small startup from India with limited funding, to compete with multibillion-dollar companies in 2019, then you're being delusional.

I don’t think I can be of any further help. Apologies.


I am a former employee from a few years ago. I didn't stay very long at all though.

In 2019 its former Chief Business Officer sued them for fraud, claiming that apps were built by Indian developers despite the claim that it was "80%" done by AI. There were detailed articles in the Wall Street Journal and The Verge at the time. I've found one reference saying it was settled out of court (the Telegraph), though I thought I'd previously read the case was dismissed.

When I was there 3 years later:

- the company had been renamed from Engineer.ai to Builder.ai

- the marketing materials still heavily pushed the claim that apps were 80% built by AI, curiously the exact same figure as it had been 3 years earlier

- there was a bunch of automation around small parts of the software development process. When customers went to create a new project there was an AI chatbot assistant (Natasha), which amongst other things asked the customer for their requirements, and created some estimates for how long things would take. There was also some automation for turning UI mockups into CSS styles and then merging the styles with templated React components. These various small bits of automation did have teams of real engineers working on it in the UK and the US. However, by and large it didn't really work. It seemed to me that the Indian outsourced programmers working on client projects totally ignored this technology anyway, and just went about their jobs as though it didn't exist. Despite being employed to work on this tech, I never had any interaction with the Indian developers building real client projects.

- a new team was spun up to use Generative AI to create frontend mock ups of an application from template components. This was integrated into the flow when potential clients were chatting with the Natasha AI chatbot. Some people worked genuinely hard on this project, and to some extent it did function. However, it didn't do much beyond the many other "create a frontend mockup from a single prompt" projects out there, other than having access to the company's internal React component templates. As far as I'm aware these frontend mockups were never used by the Indian developers who built the final projects.

In the tech world there is a very wide blurred zone between "outright fraud, which leads to a conviction in court" and "exaggerated claims". Given the extent to which Builder.ai committed literal fraud with their revenue figures and accounting, and given the hundreds of millions of dollars they raised on very limited real sales, I believe their claims about AI could plausibly also be literal fraud. However, the standards of a court case are much higher than my personal use of the word "fraud". I'm tempted to believe Musk is also bordering on fraud with e.g. the Boring Company, or his repeated claims about how close Tesla is to fully automated self-driving robotaxis. But given Tesla is building a genuine product with genuinely high sales and revenue, and given how much other crazy stuff Musk does that makes these exaggerated claims pale in comparison, it's not something that's ever going to lead to a court case.


Thanks for sharing this inside information. It's really helpful and gives me (and others researching this case) a better understanding of what was happening behind closed doors.

I have a couple of questions, if you don't mind:

1. I checked on LinkedIn and saw that most employees are based in India. Is that correct?

2. From what I understand, customers knew their apps were being built by developers, and they could use an internal tool called Studio to track progress and view the names of the developers assigned to tasks. Is that accurate?

3. Did you work with Craig Saunders, the Head of AI? I heard he was attending events and demoing some internal apps he had built, and that people were pretty impressed. Do you know what exactly he was showing?

4. The builder.ai website never claimed to use GenAI, only AI, and it clearly says that their virtual assistant, Natasha, assigns developers to projects. Have you ever seen Natasha doing this in action?

Thanks again for speaking up and helping clear up some of the confusion.


My inside information is both outdated and minimal, I was there for a short time a few years ago.

As far as I know all the development work on client projects was done by developers in India. I was too far removed from the details to know quite how they were employed and paid - I literally never had any interaction with them - but I suspect this is the main source of the LinkedIn employees.

As far as I know customers were aware apps were being built by human developers; the "80% by AI" line was prominent on the website and in the sign-up process, but it was never claimed the whole thing was automated.

It looks like Craig Saunders joined only a year ago, in June 2024, which is long after I left. Given how much AI has advanced over the last few years, and given his LinkedIn shows he previously held a senior AI role at Amazon, I wouldn't be remotely surprised if he and other employees built some impressive tech demos over the last year or two.

Regarding Natasha assigning developers to projects, I don't know to what extent it was really automated, I wasn't involved in that.


Thanks for clarifying that.

Here's the thing. The website doesn't say that 80% of the code was created using AI. That's what the fake article claims. What the site actually says is:

---

Natasha is your AI product manager.

Using machine learning algorithms, she recommends the features you need, based on the type of app you're building.

Natasha also creates an instant prototype for you, helping visualise your idea.

80% of this information is gathered automatically.

---

A few things to point out:

1. The site says it uses algorithms, not GenAI, to make recommendations and create a prototype. I assume that means some visual mockups or images of the suggested features.

2. It says it creates "80% of the information", not 80% of the code. That line is clearly referring to the recommendations, not the final software.

People spreading fake news scanned the site, misunderstood how AI works, and twisted the wording to fit their own narrative.


If you scroll down it also says 60% of the code is automated. Both percentages are essentially meaningless. If you require customers to type all their requirements into a static web-based form, does that also count as 80% automated information gathering? If 60% of an application is templated components, does that mean code generation is 60% automated?

It's meaningless, and I suspect intentionally so. It clearly implies that they were doing things in a more advanced way than a standard Indian outsourcing firm, an implication heightened by hiring people to build impressive tech demos. But the impressive tech demos and the work on real client projects were two entirely separate things.


I agree, it's confusing. It looks like marketing was throwing numbers around without really explaining what they meant.

Automation is definitely not AI. Maybe the 60% referred to reusable code, or a theme, or a template that was handed off to the developers?

Either way, 60% feels like a random number.


Small correction: automation combined with machine learning is considered AI.

So the 60% was probably referring to the amount of code (templates, components, etc) they were reusing.


Speculating: don't they offer dev services that are supposed to be done by AI? If the dev services were delivered by devs, then that would be the scam. Now that I've said the second part, it does seem lurid, because who the hell is paying for AI-first code deliverables?

---

Message to HN:

Instead of founding yet another startup, please build the next Tech Vice News and fucking go to the far corners of the tech world like Shane Smith did in North Korea, with a camera. I promise to be a founding subscriber at whatever price you set.

Things you’ll need:

1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.

2) Ex VC who exudes wealth with every footstep he/she takes

3) The camera

4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.

See, I need to know what’s it like working for a scrum master in Tencent for example during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We’ve yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn’t live without such journalism.


The short answer is no, their website doesn't claim that development is done using AI.

My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.

If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):

> Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.

Source: https://www.builder.ai/how-it-works

They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSage, and Flair:

Source: https://www.builder.ai/under-the-hood

It sounds impressive, but listing libraries doesn't prove they built an actual LLM.

So, the questions that remain unanswered are:

1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?

2. Was there a technical team behind Saunders capable of building such a model?

3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?

Having said that, the company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, BuilderAI reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators and prosecutors, and led to bankruptcy proceedings.



