Brain-computer interface (BCI) technologies are advancing rapidly, and – as with other innovations like artificial intelligence (AI) – regulation often lags, leaving the world to ‘play catch-up.’

In the US, a retroactive approach to regulatory policy has become standard practice for the Federal Trade Commission (FTC), ever since former FTC commissioner Maureen Ohlhausen popularised the concept of ‘regulatory humility’, a view that, in essence, defers regulatory intervention unless or until a market failure arises.

With President Trump back in the White House, the current administration’s disdain for regulation is clear. In January, an executive order was issued asserting that regulation has “massive costs on the lives of millions of Americans, creates a substantial restraint on our economic growth and ability to build and innovate”.

BCI technology represents an altogether new privacy threat given the neural data it generates and the overall ‘intimacy’ of this data. US Senate Democratic leader Chuck Schumer wants to get ahead of the curve and shore up privacy protections while the technology largely remains at the developmental stage.

In a letter to FTC chair Andrew Ferguson, Schumer pointed out the following: “Unlike other personal data, neural data … can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymised. This information is not only deeply personal; it is also strategically sensitive.”

Schumer urged the FTC to investigate whether companies developing or deploying BCI technologies were “engaging in deceptive or unfair practices” and requested that the watchdog outline how it intends to shore up existing privacy standards and clarify how these will apply to neural data.

The optimal strategy for managing neural data

According to Ravi de Silva, founder of regulatory compliance consultancy de Risk Partners, the world is entering a critical phase in emerging technology regulation, especially when it comes to the handling of neural data.

“Right now, most companies developing or investing in neurotech are operating in a space that has very little formal compliance infrastructure. That creates major blind spots for both regulators and businesses.”

De Silva’s view is that companies cannot afford to wait for regulators to tell them what responsible data use looks like.

“They need to set internal standards early and stay ahead of potential risks. Neurotech companies should be embedding governance into product development from day one. That means clear policies around what data is being collected, how it’s stored and used, who has access, and what consent looks like.”
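As a minimal, purely hypothetical sketch of what that checklist might look like in code, the snippet below models a consent record for neural data that gates collection and access by data scope, requesting role, and retention window. Every name here (ConsentRecord, the scope labels, the role labels) is invented for illustration; it reflects de Silva’s checklist, not any vendor’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical scope labels for categories of neural data a BCI product might collect.
NEURAL_DATA_SCOPES = {"raw_signal", "decoded_intent", "derived_mood"}

@dataclass
class ConsentRecord:
    """One user's consent: which data scopes may be collected, for how long,
    and which internal roles may access them."""
    user_id: str
    granted_scopes: set
    granted_at: datetime
    retention: timedelta
    allowed_roles: set = field(default_factory=lambda: {"clinical_team"})

    def __post_init__(self):
        # Reject scopes the governance policy has never defined.
        unknown = self.granted_scopes - NEURAL_DATA_SCOPES
        if unknown:
            raise ValueError(f"unknown data scopes: {unknown}")

    def permits(self, scope: str, role: str, when: datetime) -> bool:
        # An event is allowed only if the scope was granted, the requesting
        # role is authorised, and the retention window has not lapsed.
        return (
            scope in self.granted_scopes
            and role in self.allowed_roles
            and when <= self.granted_at + self.retention
        )

# Example: consent to raw-signal collection for 90 days, clinical access only.
consent = ConsentRecord(
    user_id="user-0001",
    granted_scopes={"raw_signal"},
    granted_at=datetime(2025, 7, 1),
    retention=timedelta(days=90),
)
assert consent.permits("raw_signal", "clinical_team", datetime(2025, 7, 15))
assert not consent.permits("raw_signal", "ads_team", datetime(2025, 7, 15))
```

The design point is the one de Silva makes: decisions about collection, access, and consent are encoded before any data flows, rather than retrofitted after a regulator asks.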

The risks associated with neural data misuse

Although BCI technology hasn’t gone mainstream yet, the rapid advancement of AI suggests it could follow a similarly accelerated path of innovation.

Neuralink is in clinical trials and recently netted a further $650m in a Series E funding round. Once the technology is cleared, other companies will likely enter the field, and a technology that sounds like the heady stuff of science fiction could be a regular part of many people’s lives within the next decade.

“One of the dangers of the BCI information generated by companies like Neuralink is that it can be commoditised, monetised, and marketed in a way that is to the detriment of the people whose well-being depends upon the technology,” says Antony K Haynes, partner and head of the cybersecurity, data privacy & AI practice group at law firm Dorf Nelson & Zauderer.

Using Neuralink as an example, Haynes says that in the near future, many people may be reliant on such technology to lead normal lives, and this reliance could put them “at the mercy of the vendor”.

Haynes points out that a vendor could increase the cost of its service or make a user pay more to prevent advertisements from being embedded into their BCI chip.

“The concern here is about the essential commodification of an individual’s identity and their overall ability to function as a human being.

“To me, the value in [Schumer’s] letter to the FTC is to start flagging these issues. There may be a limit to what a regulatory agency can do, but Schumer is saying, do what you can do under the Federal Trade Commission Act.”

The regulatory makeup of the US

The US does not have a nationwide privacy law; privacy has always been managed on a state-by-state basis.

Kurt Osburn, a director in cybersecurity company NCC Group’s risk management and governance team, states that when doing privacy reviews for clients, the company generally sticks to the California Consumer Privacy Act (CCPA) guidelines, as they are the most restrictive.

On the likelihood of a nationwide law being enacted any time soon, Osburn says: “We have a House of Representatives and a Senate that have to agree before it even gets to the White House for signature into law. Every year there’s something posted, and every year there’s something that runs through Congress, and every year it doesn’t make it to the President’s desk.”

This challenge is likely to be further exacerbated by the Trump administration’s stance on regulation.

“Any regulation for neural data is going to have to come from states. It’s also going to have to be driven by people that want to monitor, like Schumer and the Democrats, or people that do want that data to be protected,” says Osburn.

“I don’t see where the will to get it implemented as a law is going to come from, because it’s ultimately a ‘political ball’.”

Haynes notes that the Biden administration was not moving on privacy protections for AI (a reasonable bellwether for how the regulation of neural data may unfold) as quickly as civil liberties and privacy advocates would have liked.

“But the Trump administration is moving in the opposite direction, trying to reduce regulation and restrictions on the ability of corporations to innovate and integrate new products.”

Without any umbrella privacy regulation like the EU’s AI Act or General Data Protection Regulation (GDPR), Haynes says the US reality of a piecemeal, state-by-state approach to privacy regulation makes it “somewhat unstable and chaotic”. And Haynes points out that while some US states have begun to regulate AI, many corporations have lobbied at the federal level for states to be prevented from regulating AI.

“There’s this idea that regulation will impede innovation and the ability to create new products and services, putting America at a competitive disadvantage compared to China and other countries,” Haynes explains.

The likelihood of regulatory action

Given the popularisation of regulatory humility at the FTC and the broad favour for less regulation from the Trump administration, Haynes is doubtful the FTC will act on Schumer’s letter.

Schumer’s letter proposed a 30-day action window, and Haynes’ view has been borne out by the FTC’s lack of action within that timeframe.

“They have chosen not to act. I’m not surprised, given that it is consistent with the current sentiment,” he says.

“More attention should be given to protecting data privacy in general, and the data from BCIs. I just don’t think that the current agency leaders, the commissioners at the FTC, and the current composition of the US House and Senate are supportive of that initiative,” he concludes.

In the near term, as long as the matter of regulation remains a ‘political football’ on both sides of the aisle, it is unlikely that regulation will come to pass for a technology that is not yet widely available commercially, the roughly 200,000 cochlear implants in the US that use BCI technology notwithstanding. For now, the regulation of neural data appears to sit firmly in the ‘should’ rather than the ‘will’ column of regulatory inquiry.