
A new iPhone app that pays users for their call recordings, which are sold on to train AI systems, rose rapidly up the U.S. App Store charts in late September 2025, but then went offline after a security flaw exposed user data.
What Is Neon And Who Is Behind It?
Neon is a consumer app that pays users to record their phone calls and sells the anonymised data to artificial intelligence companies for use in training machine learning models. Marketed as a way to “cash in” on phone data, it positions itself as a fairer alternative to tech firms that profit from user data without compensation. The app is operated by Neon Mobile, Inc., whose New York-based founder, Alex Kiam, is a former data broker who helped sell training data to AI developers.
Only Just Launched
The app launched in the United States this month (September 2025). According to app analytics tracking, Neon entered the U.S. App Store charts on 18 September, ranking 476th in the Social Networking category. Remarkably, by 25 September it had climbed to the No. 2 spot in that category and broken into the top 10 free apps overall. On its peak day, it was downloaded more than 75,000 times. No official launch has yet taken place in the UK.
How Does The App Work?
Neon allows users to place phone calls using its in-app dialler, which routes audio through its servers. Calls made to other Neon users are recorded on both sides, while calls to non-users are recorded on one side only. Transcripts and recordings are then anonymised, with personal details such as names and phone numbers removed, before being sold to third parties. Neon says these include AI firms building voice assistants, transcription systems, and speech recognition tools.
Users are then paid in cash for the calls, credited to a linked account. The earnings model promises up to $30 per day, with 30 cents per minute for calls to other Neon users and lower rates for calls to non-users. Referral bonuses are also offered. While consumer data is routinely collected by many apps, Neon stands out because it offers a direct financial incentive for the collection of real human speech, a form of data that is more intimate and sensitive than most.
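As a rough illustration of the arithmetic, the sketch below calculates a day’s payout from the advertised figures. The 30-cent Neon-to-Neon rate and $30 daily cap are taken from the numbers above; the 15-cent non-user rate is an assumption for illustration only, since Neon says simply that calls to non-users pay less.

```python
# Illustrative payout model. The Neon-to-Neon rate and daily cap come from
# the advertised figures; the non-user rate is an assumed example, as Neon
# states only that calls to non-users pay a lower rate.

DAILY_CAP_USD = 30.00
RATE_NEON_TO_NEON = 0.30   # dollars per minute, both parties on Neon
RATE_TO_NON_USER = 0.15    # assumed lower per-minute rate (not confirmed)

def daily_payout(neon_minutes: float, non_user_minutes: float) -> float:
    """Estimate one day's earnings, capped at the advertised maximum."""
    earned = neon_minutes * RATE_NEON_TO_NEON + non_user_minutes * RATE_TO_NON_USER
    return min(earned, DAILY_CAP_USD)

print(daily_payout(100, 0))   # 30.0 -- 100 Neon-to-Neon minutes hits the cap
print(daily_payout(40, 60))   # 21.0 -- a mixed day falls short of it
```

On these numbers, a user would need well over an hour and a half of Neon-to-Neon calls every day to reach the cap, which helps explain why the referral scheme, pulling contacts onto the app at the higher rate, was so central to its growth.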
The Legal Language Behind The Data Deal
Neon’s terms of service give the company an unusually broad licence to use and resell recordings. This includes a worldwide, irrevocable, exclusive right to reproduce, host, modify, distribute, and create derivative works from user submissions. The licence is royalty-free, transferable, and allows for sublicensing through multiple tiers. Neon also claims full ownership of outputs created from user data, such as training models or audio derivatives. For most users, this means permanently giving up control over how their voice data may be reused, sold, or processed in future.
Why The App Took Off So Quickly
Neon’s rapid growth appears to have been driven by a combination of curiosity, novelty, and, above all, cash and referral-led incentives. Many users were drawn in by the promise of payment for something they already do every day: talking on the phone. The idea of monetising phone calls is also likely to have appealed particularly to users who are increasingly aware that their data is being collected and sold elsewhere.
Social media posts promoting referral links and earnings screenshots also appear to have fuelled viral growth. At the same time, widespread interest in AI tools has normalised the idea of systems that listen, learn, and improve through exposure to large datasets.
What Went Wrong?
Shortly after Neon became one of the most downloaded apps in the U.S., independent analysis revealed a serious security flaw. The app’s backend was found to be exposing not only user recordings and transcripts but also associated metadata, including phone numbers, call durations, timestamps, and payment amounts. Audio files could be accessed via direct URLs without authentication, creating a significant privacy risk for anyone whose voice was captured.
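To illustrate the class of flaw described here, often called an insecure direct object reference, the hypothetical sketch below shows why unauthenticated URLs are so dangerous: anyone who can guess or enumerate an identifier can fetch the file. The host, URL pattern, and IDs are invented for illustration and are not Neon’s actual API.

```python
# Hypothetical probe for the vulnerability class reported (unauthenticated
# direct URLs). The host, path, and IDs below are invented for illustration.
import requests

BASE_URL = "https://api.example-voice-app.com/recordings"

def is_publicly_readable(recording_id: int) -> bool:
    """Return True if the recording downloads with no credentials at all."""
    resp = requests.get(f"{BASE_URL}/{recording_id}.mp3")  # no auth header sent
    return resp.status_code == 200

# If sequential IDs return 200 without a token, the whole store is readable.
for rid in range(1000, 1005):
    print(rid, is_publicly_readable(rid))
```

A properly secured design would require a per-request authorisation check or short-lived signed URLs, so that possessing a link alone is never enough to retrieve someone’s audio.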
Neon’s response was to take its servers offline temporarily. In an email to users, the company said it was “adding extra layers of security” to protect data. However, the email did not specify what had been exposed or which user information had been compromised. The app itself remained listed in the App Store but was no longer functional due to the server shutdown.
Legal And Ethical Concerns Around Recording
Neon’s approach raises a number of legal questions, particularly around consent and data protection. In the United States, phone call recording laws differ by state: some states require consent from all participants, while others allow one-party consent. By recording only one side of a call when the other participant is not a Neon user, the company appears to be trying to avoid falling foul of all-party consent laws. However, experts have questioned whether this distinction is sufficient, especially when metadata and transcript content may still reveal personal information about the other party.
In the UK, where GDPR rules apply, the bar for lawful processing of voice data is much higher. Call recordings here are considered personal data, and companies must have a lawful basis to record and process them. This could be consent, contractual necessity, legal obligation, or legitimate interest. In practice, UK organisations must be transparent, inform all parties at the start of a call, and apply strict safeguards around storage, retention, and third-party sharing. If the recording includes special category data, such as health or political views, the legal threshold is even higher.
Why The Terms May Create Future Risk
The app’s terms of service not only cover the use of call data for AI training, but also grant Neon the right to redistribute or modify that data without further input from the user. That includes the right to create and sell synthetic voice products based on recordings, or to allow third-party developers to embed user speech in new datasets. This means that, once the data is sold, users have no real practical way of tracking where it ends up, who uses it, or for what purpose. That includes the potential for misuse in deepfake technologies or other forms of AI-generated impersonation.
A Trust Issue For Neon?
The exposure of call data so early in the app’s lifecycle has, unsurprisingly, caused a major trust problem. While the company has said it is fixing the security flaw, it will now face much closer scrutiny from app platforms, data buyers, and regulators. If Neon wants to relaunch, it may need to undergo independent security audits, publish full transparency reports, and add explicit call recording notifications and consent features. Commercially, the setback may jeopardise deals with AI firms if those companies decide to distance themselves from controversial datasets.
What About The AI Companies Using Voice Data?
For companies developing speech models, the incident highlights the importance of knowing exactly how training data has been sourced. Buyers of voice datasets will now need to ask more detailed questions about licensing, user consent, jurisdiction, and security. Any material flaw in how data was sourced can undermine models downstream, especially if it leads to legal challenges or regulatory action. Data provenance and ethical sourcing are likely to become higher priorities in due diligence processes for commercial AI development.
Issues For Users
While Neon claims to anonymise data, voice recordings carry inherent risks. Voice is increasingly used as a biometric identifier, and recorded speech can be used to train systems that replicate tone, mannerisms, and emotional expression. For individuals, this could lead to impersonation or fraud. For businesses, there is a separate concern: if employees use Neon to record work calls, they may be exposing client conversations, proprietary information, or regulated data without authorisation. This could result in GDPR breaches, disciplinary action, or reputational harm. Companies should review their mobile and communications policies and block unvetted recording apps from use on managed devices.
Regulators And App Platforms
The rise and fall of Neon within a matter of days shows how quickly new data models can go from idea to mass adoption. Platforms such as the App Store are now likely to face more pressure to assess the privacy implications of data-for-cash apps before they are allowed to scale. Referral schemes that incentivise covert recording or encourage over-sharing are likely to be reviewed more closely. Regulators may also revisit guidance on audio data, especially where recordings are repackaged and resold to machine learning companies. Voice data governance, licensing standards, and ethical AI sourcing are likely to become more prominent areas of focus in the months ahead.
Evaluating Tools Like Neon
For organisations operating in the UK, the launch of Neon should serve as a prompt to tighten call recording policies and educate staff on data risk. If a similar service becomes available locally, any use would need a clear lawful basis, robust security controls, and transparency for all parties involved. This includes notifying people before recording begins, limiting the types of calls that can be recorded, and putting strict controls on where that data is sent. In regulated industries, the use of external apps to record voice data could also breach sector-specific rules or codes of conduct. A risk assessment and a data protection impact assessment (DPIA) would be required in most business contexts.
What Does This Mean For Your Business?
The Neon episode shows just how fast the appetite for AI training data is reshaping the boundaries of consumer tech. In theory, Neon offered a way for users to reclaim some value from a data economy that usually runs without them. In practice, it seems to have revealed how fragile the balance is between innovation and responsibility. When that data includes private conversations, even anonymised, the margin for error is narrow. Voice is not like search history or location data because it’s personal, expressive, and hard to replace if misused.
What happened with Neon also appears to show how little control users have once they opt in. For example, the terms of service handed the company almost total freedom to store, repackage, and resell recordings and outputs, with no practical ability for users to track where their voice ends up. Even if users are comfortable making that trade, the people they speak to may not be. From an ethical standpoint, recording conversations for profit, especially with people unaware they are being recorded, raises serious questions about consent and accountability.
For UK businesses, the risks are not just theoretical. If employees start using similar apps to generate income, they could unintentionally upload sensitive or regulated information to unknown third parties. That creates exposure under GDPR, commercial contracts, and sector-specific codes, and may breach client trust. Businesses will need to move quickly to block such apps on company devices and reinforce clear internal rules around recording, call handling, and use of AI data services.
For AI companies, the lesson is equally clear. The hunger for diverse, real-world training data must be matched with rigorous scrutiny of how that data is sourced. Datasets obtained through poorly controlled consumer schemes are more likely to carry risk, not only in terms of legality but also model quality and future auditability. Voice data is especially sensitive, and provenance will now need to be a standard consideration in every procurement and development process.
More broadly, Neon’s brief rise exposes the gap between platform rules, regulatory oversight, and the speed of public adoption. App marketplaces now face growing pressure to vet data-collection models more stringently, particularly those that monetise content recorded from other people. It also raises a wider challenge: how to build the AI systems people want without normalising tools that trade in privacy. As interest in AI grows, the burden of building that future responsibly will only increase for every stakeholder involved.