Three former Google X scientists want to give you a second brain, not in a sci-fi, chip-in-the-head sense, but through an AI-powered app that builds context by listening to everything you say in the background. Their startup, TwinMind, has raised $5.7 million in seed funding and released an Android version along with a new AI speech model. It already offers an iPhone version.
Co-founded in March 2024 by Daniel George (CEO) and his former Google X colleagues Sunny Tang and Mahi Karim (both CTOs), TwinMind runs in the background, capturing ambient speech (with the user's permission) to build a personal knowledge graph.
By converting spoken ideas, meetings, lectures, and conversations into structured memory, the app can generate AI-powered notes, to-dos, and answers. It works offline, transcribing audio on-device in real time, and can capture audio continuously for 16 to 17 hours without draining the device's battery, the founders say. The app can also back up user data so conversations can be recovered if a device is lost, although users can opt out. It also supports real-time translation in more than 100 languages.
TwinMind sets itself apart from AI meeting-notes apps such as Otter, Granola, and Fireflies by capturing audio in the background throughout the day. To make this possible, the team built a low-level service in native Swift that runs natively on the iPhone. In contrast, many competing apps are built with React Native and rely on cloud-based processing, which Apple restricts from running in the background for extended periods, George said in an exclusive interview.
"We spent about six to seven months last year just to perfect this continuous audio capture and to find a lot of hacks to work around Apple's walled garden," he said.
George left Google X in 2020 and got the idea for TwinMind in 2023 while working as a vice president at JPMorgan implementing applied AI, sitting through back-to-back meetings each day. To save time, he built a script that captured audio, transcribed it on his iPad, and fed it to Claude, which began to understand his projects and even generated usable code. Impressed by the results, he shared it with friends and posted about it on Blind, where others showed interest but did not want to run anything like it on their work laptops. This inspired him to build an app that could run on a personal phone, listening quietly during meetings to gather useful context.
In addition to the mobile app, TwinMind offers a Chrome extension that collects additional context from browser activity. Using vision AI, it can visually scan open tabs and interpret content from various platforms, including email, Slack, and Notion.
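TwinMind hasn't published how its Swift service is put together, but a minimal sketch of the general approach, continuous capture through a background audio session with transcription kept on-device, might look like the following. It leans on Apple's AVAudioSession, AVAudioEngine, and Speech frameworks purely for illustration (TwinMind ships its own speech model), and the class name is hypothetical.

```swift
import AVFoundation
import Speech

// Sketch only: a native audio capture service that keeps recording while the app is
// backgrounded (requires the "audio" UIBackgroundModes key and prior microphone and
// speech-recognition authorization) and transcribes on-device so audio never leaves
// the phone.
final class AmbientCaptureService {
    private let engine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var request: SFSpeechAudioBufferRecognitionRequest?

    func start() throws {
        // Configure the shared audio session for long-running recording.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: [])
        try session.setActive(true)

        // Ask for on-device recognition so transcripts stay local.
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true
        self.request = request

        // Tap the microphone and stream buffers straight into the recognizer;
        // the raw audio is discarded as soon as it has been consumed.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        engine.prepare()
        try engine.start()

        recognizer?.recognitionTask(with: request) { result, error in
            if let text = result?.bestTranscription.formattedString {
                // Persist only the transcribed text locally (e.g., into a knowledge graph).
                print("transcript:", text)
            }
            if error != nil { self.stop() }
        }
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        request?.endAudio()
    }
}
```

A production service would also need interruption and route-change handling, battery-aware buffering, and the permission prompts wired up front, which is presumably where much of that six to seven months went.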
The startup also used the extension to shortlist interns from the more than 850 applications it received this summer.
"We opened all the LinkedIn profiles and CVs of the 854 applicants in browser tabs, then asked the Chrome extension to rank the best candidates," George said. "It did a great job. That's how we hired our last four interns."

He said that current AI chatbots, including OpenAI's ChatGPT and Anthropic's Claude, cannot easily process hundreds of documents or pull together personal context from services such as LinkedIn or Gmail. Similarly, AI-powered browsers such as Perplexity's and The Browser Company's lack knowledge of your offline conversations and in-person meetings.
The startup currently has more than 30,000 users, about 15,000 of them active every month. George said 20% to 30% of TwinMind users also use the Chrome extension.
While the U.S. is TwinMind's largest user base so far, the startup is also seeing traction in India, Brazil, the Philippines, Ethiopia, Kenya, and Europe.
TwinMind targets a general audience, though 50% to 60% of its users are currently professionals, about 25% are students, and the remaining 20% to 25% are people using it for personal purposes.
George told TechCrunch that his father is one of the people using TwinMind, in his case to write his autobiography.
One of the key concerns around AI is its potential to compromise user privacy. But George said TwinMind does not train its models on user data and is designed to work without sending recordings to the cloud. Unlike many other AI note-taking apps, TwinMind does not let users access audio recordings later; the audio is deleted on the fly, while only the transcribed text is stored locally in the app, he said.
Google X experience helped speed things up
TwinMind's co-founders spent several years working on various projects at Google X. George told TechCrunch that he alone worked on six projects, including Iyo, the team behind the AI-powered earbuds that recently made headlines for suing OpenAI and Jony Ive. That experience helped the TwinMind team move quickly from idea to product.
"Google X was really the right place to prepare for starting your own company," George said. "There are something like 30 to 40 startup-like projects going on at any time. Nowhere else do you get to work on six startups in two or three years, at least not in such a short time."

Before joining Google, George worked on deep learning for gravitational-wave astrophysics as part of the LIGO group at Illinois' National Center for Supercomputing Applications. He completed his PhD in AI for astrophysics in just one year, at age 24, an achievement that landed him a role as a deep learning and AI researcher at Stephen Wolfram's research lab in 2017.
That early relationship with Wolfram came full circle years later: he ended up writing the first check for TwinMind, marking his first investment in a startup. The recent seed round was led by Streamlined Ventures, with participation from Sequoia Capital and other investors, including Wolfram. The round values TwinMind at $60 million post-money.
TwinMind's Ear-3 model
In addition to its apps and browser extension, TwinMind has also introduced its Ear-3 model, the successor to its existing Ear-2, which supports more than 140 languages and has a word error rate of 5.26%, the startup said. The new model can also identify different speakers in a conversation and has a speaker diarization error rate of 3.8%.
The new AI model is a fine-tuned mix of several open-source models, trained on a curated set of human-annotated internet data, including podcasts, videos, and movies.
"We found that the more languages you support, the better the model gets at understanding accents and regional dialects, because it's trained on a wider range of speakers," George said.
The model costs $0.23 per hour and will be available through an API for developers and enterprises over the next few weeks.

Ear-3, unlike Ear-2, does not support a fully offline experience, as it is larger and runs in the cloud. However, the app automatically switches to Ear-2 if the internet connection drops and returns to Ear-3 once connectivity is restored, George said.
With the launch of Ear-3, TwinMind now offers a Pro subscription at $15 per month, with a larger context window of 2 million tokens and email support within 24 hours. The free version still includes all existing features, including unlimited hours of transcription and on-device speech recognition.
The startup currently has a team of 11. It plans to hire designers to improve its user experience and to build a business development team to sell its API. It also plans to spend on acquiring new users.
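A rough sketch of what that connectivity-based switching could look like in Swift, using Apple's Network framework; the type names here are illustrative assumptions, not TwinMind's actual code:

```swift
import Network

// Which speech model the app should route audio to.
enum SpeechModel { case ear2OnDevice, ear3Cloud }

// Watches network reachability and picks the cloud model when online,
// dropping back to the on-device model when the connection goes away.
final class ModelSelector {
    private let monitor = NWPathMonitor()
    private(set) var active: SpeechModel = .ear3Cloud

    func start() {
        monitor.pathUpdateHandler = { [weak self] path in
            // Event-driven switch on every reachability change.
            self?.active = (path.status == .satisfied) ? .ear3Cloud : .ear2OnDevice
        }
        monitor.start(queue: DispatchQueue(label: "twinmind.net.monitor"))
    }

    func stop() { monitor.cancel() }
}
```

Keeping the check event-driven rather than polling fits an app that is meant to run all day without draining the battery; the real implementation would also have to hand off any in-flight transcription session between the two models.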

