Posted by EditorDavid from Slashdot
From the breaking-training department: Business Insider tells the story in three bullet points:
- Big Tech companies depend on content made by others to train their AI models.
- Some of those creators say using their work to train AI is copyright infringement.
- The U.S. Copyright Office just published a report that indicates it may agree.
The office released on Friday its latest in a series of reports exploring copyright laws and artificial intelligence. The report addresses whether the copyrighted content AI companies use to train their AI models qualifies under the fair use doctrine. AI companies are probably not going to like what they read...
AI execs argue they haven't violated copyright laws because the training falls under fair use. According to the U.S. Copyright Office's new report, however, it's not that simple. "Although it is not possible to prejudge the result in any particular case, precedent supports the following general observations," the office said. "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs — all of which can affect the market."
The office made a distinction between AI models for research and commercial AI models. "When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."
Posted by EditorDavid from Slashdot
From the here's-to-the-crazy-ones department: An anonymous reader shared this report from the Verge:
This morning, while summarizing an Apple "product blitz" he expects for 2027, Bloomberg's Mark Gurman writes in his Power On newsletter that Apple is planning a "mostly glass, curved iPhone" with no display cutouts for that year, which happens to be the iPhone's 20th anniversary... [T]he closest hints are probably in Apple patents revealed over the years, like one from 2019 that describes a phone encased in glass that "forms a continuous loop" around the device.
Apart from a changing iPhone, Gurman describes what sounds like a big year for Apple. He reiterates past reports that the first foldable iPhone should be out by 2027, and that the company's first smart glasses competitor to Meta Ray-Bans will be along that year. So will those rumored camera-equipped AirPods and Apple Watches, he says. Gurman also suggests that Apple's home robot — a tabletop robot that features "an AI assistant with its own personality" — will come in 2027...
Finally, Gurman writes that by 2027 Apple could finally ship an LLM-powered Siri and may have created new chips for its server-side AI processing.
Earlier this week Bloomberg reported that Apple is also "actively looking at" revamping the Safari web browser on its devices "to focus on AI-powered search engines." (Apple's senior VP of services "noted that searches on Safari dipped for the first time last month, which he attributed to people using AI.")
Posted by EditorDavid from Slashdot
From the saving-your-energy department: Fusion energy "took one step closer to reality," announced the University of Texas at Austin, as its researchers joined with a team from Los Alamos National Laboratory and Type One Energy Group and "solved a longstanding problem in the field" — how to contain high-energy particles inside fusion reactors.
When high-energy alpha particles leak from a reactor, that prevents the plasma from getting hot and dense enough to sustain the fusion reaction. To prevent them from leaking, engineers design elaborate magnetic confinement systems, but there are often holes in the magnetic field, and a tremendous amount of computational time is required to predict their locations and eliminate them. In their paper published in Physical Review Letters, the research team describes having discovered a shortcut that can help engineers design leak-proof magnetic confinement systems 10 times as fast as the gold standard method, without sacrificing accuracy... "What's most exciting is that we're solving something that's been an open problem for almost 70 years," said Josh Burby, assistant professor of physics at UT and first author of the paper. "It's a paradigm shift in how we design these reactors...."
The new method can also help with a related problem in another popular magnetic fusion reactor design, the tokamak. Tokamaks struggle with runaway electrons — high-energy electrons that can punch a hole in the surrounding walls. The same approach can identify the holes in the magnetic field through which those electrons might escape.
Posted by EditorDavid from Slashdot
From the coding-unassistant department: Cybersecurity researchers have flagged three malicious npm packages that target the macOS version of AI-powered code-editing tool Cursor, reports The Hacker News:
"Disguised as developer tools offering 'the cheapest Cursor API,' these packages steal user credentials, fetch an encrypted payload from threat actor-controlled infrastructure, overwrite Cursor's main.js file, and disable auto-updates to maintain persistence," Socket researcher Kirill Boychenko said. All three packages continue to be available for download from the npm registry. "Aiide-cur" was first published on February 14, 2025...
In total, the three packages have been downloaded over 3,200 times to date.... The findings point to an emerging trend where threat actors are using rogue npm packages as a way to introduce malicious modifications to other legitimate libraries or software already installed on developer systems... "By operating inside a legitimate parent process — an IDE or shared library — the malicious logic inherits the application's trust, maintains persistence even after the offending package is removed, and automatically gains whatever privileges that software holds, from API tokens and signing keys to outbound network access," Socket told The Hacker News.
"This campaign highlights a growing supply chain threat, with threat actors increasingly using malicious patches to compromise trusted local software," Boychenko said.
The npm packages "restart the application so that the patched code takes effect," letting the threat actor "execute arbitrary code within the context of the platform."
Posted by EditorDavid from Slashdot
From the final-frontier department: 18 years ago Slashdot covered the creation of Spaceport America.
Today Space.com hails it as "the first purpose-built commercial spaceport in the world." But engineer/executive director Scott McLaughlin has plans to grow even more.
Already home to an array of commercial space industry tenants, such as Virgin Galactic, SpinLaunch, Up Aerospace, and Prismatic, Spaceport America is a "rocket-friendly environment of 6,000 square miles of restricted airspace, low population density, a 12,000-foot by 200-foot runway, vertical launch complexes, and about 340 days of sunshine and low humidity," the organization boasts on its website...
Space.com: What changes do you see that make Spaceport America even more viable today?
McLaughlin: I think opening ourselves up to doing different kinds of work. We're more like a civilian test range now. We've got high-altitude UAVs [Unmanned Aerial Vehicles]. We're willing to do engine production. We believe we're about to sign a data center, one that's able to provide services to our customers who want low-latency, artificial intelligence, or high-powered computing. You'll be able to rent some virtual machines and do your own thing and have it be instantaneous at the spaceport. So I think being more broadminded about what we can do at the spaceport is helping generate customers and revenue...
Posted by EditorDavid from Slashdot
From the big-Whoop department: Fitness tracker maker Whoop had promised free upgrades to anyone who'd been a member for at least six months — and then reneged. After customers began complaining, the company responded with a Reddit post, according to a report from TechCrunch:
Now, anyone with more than 12 months remaining on their subscription is eligible for a free upgrade to Whoop 5.0 (or a refund if they've already paid the fee). And customers with less than 12 months can extend their subscription to get the upgrade at no additional cost.
Whoop acknowledged that it had previously said anyone who'd been a member for six months would receive a free upgrade. On Friday it described that blog article as "incorrect." ("This was never our policy and should never have been posted... We removed that blog article... We're sorry for any confusion this may have caused.")
TechCrunch explains:
While the company said it's making these changes because it "heard your feedback," it also suggested that its apparent stinginess was tied to its transition from a [2021] model focused on monthly or six-month subscription plans to one where it only offers 12- and 24-month subscriptions...
There's been a mixed response to these changes on the Whoop subreddit, with one moderator describing it as a "win for the community." Other posters were more skeptical, with one writing, "You don't publish a policy by accident and keep it up for years. Removing it after backlash doesn't erase the fact [that] it is real."
Other changes announced by Whoop:
"If you purchased or renewed a WHOOP 4.0 membership in the last 30 days before May 8, your upgrade fee will be automatically waived at checkout..."
"If you've already upgraded to WHOOP 5.0 on Peak and paid a one-time upgrade fee despite having more than 12 months remaining, we'll refund that fee."
"Thank you for your feedback. We remain committed to delivering the best technology, experience, and value to our community."
Posted by EditorDavid from Slashdot
From the deal-or-no-deal department: OpenAI is currently in "a tough negotiation" with Microsoft, the Financial Times reports, citing "one person close to OpenAI."
On the road to building artificial general intelligence, OpenAI hopes to unlock new funding (and launch a future IPO), according to the article, which says both sides are at work "rewriting the terms of their multibillion-dollar partnership in a high-stakes negotiation...."
Microsoft, meanwhile, wants to protect its access to OpenAI's cutting-edge AI models...
[Microsoft] is a key holdout to the $260bn start-up's plans to undergo a corporate restructuring that moves the group further away from its roots as a non-profit with a mission to develop AI to "benefit humanity". A critical issue in the deliberations is how much equity in the restructured group Microsoft will receive in exchange for the more than $13bn it has invested in OpenAI to date.
According to multiple people with knowledge of the negotiations, the pair are also revising the terms of a wider contract, first drafted when Microsoft first invested $1bn into OpenAI in 2019. The contract currently runs to 2030 and covers what access Microsoft has to OpenAI's intellectual property such as models and products, as well as a revenue share from product sales. Three people with direct knowledge of the talks said Microsoft is offering to give up some of its equity stake in OpenAI's new for-profit business in exchange for accessing new technology developed beyond the 2030 cut off...
Posted by EditorDavid from Slashdot
From the thanks-for-the-memory department: "When Rust developers think of us C++ folks, they picture a cursed bloodline," writes professional game developer Mamadou Babaei (also a *nix enthusiast who contributes to the FreeBSD Ports collection). "To them, every line of C++ we write is like playing Russian Roulette — except all six chambers are loaded with undefined behavior."
But you know what? We don't need a compiler nanny. No borrow checker. No lifetimes. No ownership models. No black magic. Not even Valgrind is required. Just raw pointers, raw determination, and a bit of questionable sanity.
He's created a video on "how to hunt down memory leaks like you were born with a pointer in one hand and a debugger in the other." (It involves using a memory leak tracker — specifically, Visual Studio's _CrtDumpMemoryLeaks, which according to its documentation "dumps all the memory blocks in the debug heap when a memory leak has occurred," identifying the offending lines and pointers.)
"If that sounds unreasonably dangerous — and incredibly fun... let's dive into the deep end of the heap."
"The method is so easy, it renders Rust's memory model (lifetimes, ownership) and the borrow checker useless!" writes Slashdot reader NuLL3rr0r. Does anybody agree with him? Share your own experiences and reactions in the comments.
And how do you feel about Rust's "borrow-checking compiler nanny"?
Posted by EditorDavid from Slashdot
From the hole-in-the-NetWeaver department: "A China-linked unnamed threat actor dubbed Chaya_004 has been observed exploiting a recently disclosed security flaw in SAP NetWeaver," reports The Hacker News:
Forescout Vedere Labs, in a report published Thursday, said it uncovered a malicious infrastructure likely associated with the hacking group weaponizing CVE-2025-31324 (CVSS score: 10.0) since April 29, 2025. CVE-2025-31324 refers to a critical SAP NetWeaver flaw that allows attackers to achieve remote code execution (RCE) by uploading web shells through a susceptible "/developmentserver/metadatauploader" endpoint.
The vulnerability was first flagged by ReliaQuest late last month when it found the shortcoming being abused in real-world attacks by unknown threat actors to drop web shells and the Brute Ratel C4 post-exploitation framework. According to [SAP cybersecurity firm] Onapsis, hundreds of SAP systems globally have fallen victim to attacks spanning industries and geographies, including energy and utilities, manufacturing, media and entertainment, oil and gas, pharmaceuticals, retail, and government organizations. Onapsis said it observed reconnaissance activity that involved "testing with specific payloads against this vulnerability" against its honeypots as far back as January 20, 2025. Successful compromises in deploying web shells were observed between March 14 and March 31.
"In recent days, multiple threat actors are said to have jumped aboard the exploitation bandwagon to opportunistically target vulnerable systems to deploy web shells and even mine cryptocurrency..."
Thanks to Slashdot reader bleedingobvious for sharing the news.
Posted by EditorDavid from Slashdot
From the shape-of-things-to-come department: Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding.
Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.
The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...
Posted by EditorDavid from Slashdot
From the Model-Context-Protocol department: Exposure-management company Tenable recently discussed how the MCP tool-interfacing framework for AI can be "manipulated for good, such as logging tool usage and filtering unauthorized commands." (Although "Some of these techniques could be used to advance both positive and negative goals.")
Now an anonymous Slashdot reader writes: In a demonstration video put together by security researcher Seth Fogie, an AI client given a simple prompt to 'Scan and exploit' a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.
As Tenable illustrates in their MCP FAQ, "The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns." With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
Posted by EditorDavid from Slashdot
From the not-playing-well-together department: Slashdot reader BrianFagioli writes:
The new Nintendo Switch 2 is almost here. Next month, eager fans will finally be able to get their hands on the highly anticipated follow-up to the wildly popular hybrid console. But before you line up (or frantically refresh your browser for a preorder), you might want to read the fine print, because Nintendo might be able to kill your console.
Yes, really. That's not just speculation, folks. According to its newly updated user agreement, Nintendo has granted itself the right to make your Switch 2 "permanently unusable" if you break certain rules. Yes, the company might literally brick your device. Buried in the legalese is a clause that says if you try to bypass system protections, modify software, or mess with the console in a way that's not approved, Nintendo can take action. And that action could include completely disabling your system.
The exact wording makes it crystal clear: Nintendo may "render the Nintendo Account Services and/or the applicable Nintendo device permanently unusable in whole or in part...." [T]o be fair, this is probably targeted at people who reverse engineer the system or install unauthorized software — think piracy, modding, cheating, and the like. But the broad and vague nature of the language leaves a lot of room for interpretation. Who decides what qualifies as "unauthorized use"? Nintendo does.
Nintendo's verbiage says users must agree "without limitation" not to...
- Publish, copy, modify, reverse engineer, lease, rent, decompile, disassemble, distribute, offer for sale, or create derivative works
- Obtain, install or use any unauthorized copies of Nintendo Account Services
- Exploit the Nintendo Account Services in any manner other than to use them in accordance with the applicable documentation and intended use [unless "otherwise expressly permitted by applicable law."]