Silicon Valley’s AI startups have raised a record $150 billion this year.
As the Financial Times (FT) reported Sunday, December 28, this fundraising is part of an effort by these companies to safeguard themselves in case the surge in artificial intelligence (AI) investment slows next year.
The report, which cites data from PitchBook, shows that this year's funding surpassed the previous record of $92 billion set in 2021. Companies like OpenAI and Anthropic have drawn significant interest from investors.
Venture capitalists and industry experts informed the FT that this funding will support growth and provide a buffer in case of an investment downturn due to concerns over high spending on AI infrastructure.
“You should make hay while the sun is shining,” said Lucas Swisher, a partner at Coatue who has invested in OpenAI, Databricks, and SpaceX. “2026 might bring something unexpected. When the market offers the chance, build a strong balance sheet.”
The FT noted that funding levels for 2025 are bolstered by some record rounds: $41 billion for OpenAI, $13 billion for Anthropic, and Meta’s $14 billion investment in data-labeling startup Scale AI.
The report adds that cost pressures have led to more frequent funding rounds, especially for companies developing “frontier” AI models that need massive computing power and expensive chips.
Sources close to OpenAI told the FT that the startup’s revenues for this year are about $13 billion. However, it is also losing billions of dollars each year as it builds its models, products, and infrastructure.
In other AI news, PYMNTS spoke on Monday with Adam Hiatt, vice president of fraud strategy at payments platform Spreedly, about how technology helps prevent fraud.
Although AI has rightfully gained attention for making fraud easier to carry out, the report stated that fraudsters are not the only ones with access to the technology.
Nonetheless, the use of AI in popular and increasingly industrialized fraud schemes has compressed the time defenders have to respond and raised the operational complexity facing human reviewers, according to the report.
“Distinguishing between the good and the bad is becoming something that even thorough manual review can’t handle,” Hiatt said. “It used to be you could throw people at the issue, but that’s getting harder.”
Source: https://www.pymnts.com/
