Posted by EditorDavid from Slashdot
From the breeding-success department: Interesting Engineering reports:
Astral Systems, a UK-based private commercial fusion company, has claimed to have become the first firm to successfully breed tritium, a vital fusion fuel, using its own operational fusion reactor. This achievement, made with the University of Bristol, addresses a significant hurdle in the development of fusion energy....
Scientists from Astral Systems and the University of Bristol produced and detected tritium in real-time from an experimental lithium breeder blanket within Astral's multi-state fusion reactors. "There's a global race to find new ways to develop more tritium than what exists in today's world — a huge barrier is bringing fusion energy to reality," said Talmon Firestone, CEO and co-founder of Astral Systems. "This collaboration with the University of Bristol marks a leap forward in the search for viable, greater-than-replacement tritium breeding technologies. Using our multi-state fusion technology, we are the first private fusion company to use our reactors as a neutron source to produce fusion fuel."
Astral Systems' approach uses its Multi-State Fusion (MSF) technology, which the company says will commercialize fusion power with better performance and efficiency, and at lower cost, than traditional reactors. Their reactor design, the result of 25 years of engineering and over 15 years of runtime, incorporates recent advances in the understanding of stellar physics. A core innovation is lattice confinement fusion (LCF), a concept first demonstrated by NASA in 2020, which allows Astral's reactor to achieve solid-state fuel densities 400 million times higher than those in plasma. The company's reactors are designed to induce two distinct fusion reactions simultaneously from a single power input, with fusion occurring in both a plasma and a solid-state lattice.
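For context, and not specific to Astral's blanket design, the tritium-breeding step in a lithium blanket relies on the standard neutron-capture reactions on the two natural lithium isotopes, shown below with their textbook energy balances:

```latex
% Standard lithium breeding reactions (generic textbook values,
% not Astral-specific performance figures).
\begin{align}
  n + {}^{6}\mathrm{Li} &\rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV} \\
  n_{\mathrm{fast}} + {}^{7}\mathrm{Li} &\rightarrow {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\ \mathrm{MeV}
\end{align}
```

The second reaction regenerates a neutron, which is what makes greater-than-replacement breeding ratios possible in principle.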
Posted by EditorDavid from Slashdot
From the opening-AI department: "Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer.
This gives the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...
As the VS Code team explained previously, shifts in the AI tooling landscape, such as the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has made it more important to crowdsource contributions that can rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.
"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli"
Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable.
Posted by EditorDavid from Slashdot
From the inflammaging department: "Some of our basic assumptions about the biological process of aging might be wrong," reports the New York Times — citing new research on a small Indigenous population in the Bolivian Amazon. [Alternate URL here.]
Scientists have long believed that long-term, low-grade inflammation — also known as "inflammaging" — is a universal hallmark of getting older. But this new data raises the question of whether inflammation is directly linked to aging at all, or whether it instead reflects a person's lifestyle or environment. The study, which was published Monday, found that people in two nonindustrialized areas experienced a different kind of inflammation throughout their lives than more urban people — likely tied to infections from bacteria, viruses and parasites rather than the precursors of chronic disease. Their inflammation also didn't appear to increase with age.
Scientists compared inflammation signals in existing data sets from four distinct populations in Italy, Singapore, Bolivia and Malaysia; because they didn't collect the blood samples directly, they couldn't make exact apples-to-apples comparisons. But if validated in larger studies, the findings could suggest that diet, lifestyle and environment influence inflammation more than aging itself, said Alan Cohen, an author of the paper and an associate professor of environmental health sciences at Columbia University. "Inflammaging may not be a direct product of aging, but rather a response to industrialized conditions," he said, adding that this was a warning to experts like him that they might be overestimating its pervasiveness globally.
Posted by EditorDavid from Slashdot
From the robo-rooter department: We're living in a new world now, one where an AI-powered penetration tester "now tops an eminent US security industry leaderboard that ranks red teamers based on reputation." CSO Online reports:
On HackerOne, which connects organizations with ethical hackers to participate in their bug bounty programs, "Xbow" scored notably higher than 99 other hackers in identifying and reporting enterprise software vulnerabilities. It's a first in bug bounty history, according to the company that operates the eponymous bot...
Xbow is a fully autonomous AI-driven penetration tester (pentester) that requires no human input, but, its creators said, "operates much like a human pentester" that can scale rapidly and complete comprehensive penetration tests in just a few hours. According to its website, it passes 75% of web security benchmarks, accurately finding and exploiting vulnerabilities.
Xbow submitted nearly 1,060 vulnerabilities to HackerOne, including remote code execution, information disclosures, cache poisoning, SQL injection, XML external entities, path traversal, server-side request forgery (SSRF), cross-site scripting, and secret exposure. The company said it also identified a previously unknown vulnerability in Palo Alto's GlobalProtect VPN platform that impacted more than 2,000 hosts. Of the vulnerabilities Xbow submitted over the last 90 days, 54 were classified as critical, 242 as high and 524 as medium in severity. The company's bug bounty programs have resolved 130 vulnerabilities, and 303 are classified as triaged.
Posted by EditorDavid from Slashdot
From the network-effects department: This week Hewlett-Packard Enterprise settled its antitrust case with America's Justice Department, "paving the way for its acquisition of rival kit maker Juniper Networks," reported Telecoms.com:
Under the agreement, HPE has agreed to divest its Instant On unit, which sells a range of enterprise-grade Wi-Fi networking equipment for campus and branch deployments. It has also agreed to license Juniper's Mist AIOps source code — a software suite that enables AI-based network automation and management. HPE can live with that, since its primary motivation for buying Juniper is to improve its prospects in an IT networking market dominated by Cisco, where others like Arista and increasingly Nokia and Nvidia are also trying to make inroads.
And after receiving regulatory clearance, HPE "very quickly closed the deal..." reports The Motley Fool. "In the press release heralding the news, the buyer wrote that it 'doubles the size of HPE's networking business and provides customers with a comprehensive portfolio of networking solutions.'"
Investors were obviously happy about this: according to data compiled by S&P Global Market Intelligence, the company's stock price ballooned by nearly 16% across the week, largely on the news.... The Justice Department had alleged, in a lawsuit filed in January, that an HPE/Juniper tie-up would essentially result in a duopoly in networking equipment. It claimed that a beefed-up HPE and networking incumbent Cisco would hold a combined share of more than 70% of the domestic market.
Thanks to long-time Slashdot reader AmiMoJo for sharing the news.
Posted by EditorDavid from Slashdot
From the I-said-face department: Long-time Slashdot reader AmiMoJo shared this report from the Apple news blog 9to5Mac:
iOS 26 is a packed update for iPhone users thanks to the new Liquid Glass design and major updates for Messages, Wallet, CarPlay, and more. But another new feature was just discovered in the iOS 26 beta: FaceTime will now freeze your call's video and audio if someone starts undressing.
When Apple unveiled iOS 26 last month, it mentioned a variety of new family tools... "Communication Safety expands to intervene when nudity is detected in FaceTime video calls, and to blur out nudity in Shared Albums in Photos." However, at least in the iOS 26 beta, it seems that a similar feature may be in place for all users — adults included.
That's the claim of an X.com user named iDeviceHelp, who says FaceTime in iOS 26 swaps in a warning message reading "Audio and video are paused because you may be showing something sensitive," giving users a choice of ending the call or resuming it.
9to5Mac says "It's unclear whether this is an intended behavior, or just a bug in the beta that's applying the feature to adults... [E]verything happens on-device so Apple has no idea about the contents of your call."
Posted by EditorDavid from Slashdot
From the superuser-don't department: In April researchers responsibly disclosed two security flaws found in Sudo "that could enable local attackers to escalate their privileges to root on susceptible machines," reports The Hacker News. "The vulnerabilities have been addressed in Sudo version 1.9.17p1 released late last month."
Stratascale researcher Rich Mirch, who is credited with discovering and reporting the flaws, said CVE-2025-32462 has managed to slip through the cracks for over 12 years. It is rooted in Sudo's "-h" (host) option, which makes it possible to list a user's sudo privileges for a different host. The feature was enabled in September 2013. However, the identified bug meant that running the Sudo command with the host option referencing an unrelated remote host would also allow any command permitted on that remote host to be executed on the local machine. "This primarily affects sites that use a common sudoers file that is distributed to multiple machines," Sudo project maintainer Todd C. Miller said in an advisory. "Sites that use LDAP-based sudoers (including SSSD) are similarly impacted."
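For illustration, here is a minimal sketch of the shared-sudoers scenario the advisory describes; the user, hostnames, and command are hypothetical, and the behavior noted in the comments is the pre-1.9.17p1 bug, not something a patched Sudo permits:

```
# Shared sudoers file distributed to every machine in a fleet.
# alice is only supposed to get root privileges on the host "db01".
alice db01 = (root) /usr/bin/systemctl restart postgresql

# On any other machine, a vulnerable sudo (before 1.9.17p1) would accept
#   sudo -h db01 /usr/bin/systemctl restart postgresql
# and execute the command locally as root, because the -h (host) option
# caused the db01-only rule to match even though this host is not db01.
```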
CVE-2025-32463, on the other hand, leverages Sudo's "-R" (chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file. It's also a critical-severity flaw. "The default Sudo configuration is vulnerable," Mirch said. "Although the vulnerability involves the Sudo chroot feature, it does not require any Sudo rules to be defined for the user. As a result, any local unprivileged user could potentially escalate privileges to root if a vulnerable version is installed...."
Miller said the chroot option will be removed completely from a future release of Sudo and that supporting a user-specified root directory is "error-prone."
Posted by msmash from Slashdot
From the how-about-that department: Software engineer Sean Goedecke argues that AI coding agents have already been commoditized because they require no special technical advantages, just better base models. He writes: All of a sudden, it's the year of AI coding agents. Anthropic released Claude Code, OpenAI released their Codex agent, GitHub released its own autonomous coding agent, and so on. I've done my fair share of writing about whether AI coding agents will replace developers, and in the meantime how best to use them in your work. Instead, I want to make what I think is now a pretty firm observation: AI coding agents have no secret sauce.
[...] The reason everyone's doing agents now is the same reason everyone's doing reinforcement learning now -- from one day to the next, the models got good enough. Claude Sonnet 3.7 is the clear frontrunner here. It's not the smartest model (in my opinion), but it is the most agentic: it can stick with a task and make good decisions over time better than other models with more raw brainpower. But other AI labs have more agentic models now as well. There is no moat.
There's also no moat to the actual agent code. It turns out that "put the model in a loop with a 'read file' and 'write file' tool" is good enough to do basically anything you want. I don't know for sure that the closed-source options operate like this, but it's an educated guess. In other words, the agent hackers in 2023 were correct, and the only reason they couldn't build Claude Code then was that they were too early to get to use the really good models.
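The "model in a loop with file tools" pattern the author describes can be made concrete with a short sketch. This is a generic illustration, not how Claude Code or any other product is actually implemented; call_model() is a placeholder for whichever LLM API you use.

```python
# Minimal agent-loop sketch: an LLM plus "read file" / "write file" tools.
# Purely illustrative; call_model() must be wired to a real model API.
import json
from pathlib import Path


def read_file(path: str) -> str:
    """Tool: return the contents of a file."""
    return Path(path).read_text()


def write_file(path: str, content: str) -> str:
    """Tool: overwrite a file and report what was written."""
    Path(path).write_text(content)
    return f"wrote {len(content)} characters to {path}"


TOOLS = {"read_file": read_file, "write_file": write_file}


def call_model(messages: list[dict]) -> dict:
    """Placeholder for an LLM call. Expected to return either
    {"tool": "<name>", "args": {...}} or {"done": "<summary>"}."""
    raise NotImplementedError("plug in your model provider here")


def run_agent(task: str, max_steps: int = 20) -> str:
    """Loop: ask the model what to do, run the requested tool, feed the
    result back, and stop when the model says it is done."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)
        if "done" in action:
            return action["done"]
        result = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "stopped: step limit reached"
```

In practice real agents layer shell tools, sandboxing, and context management on top, but the core loop is about this small, which is the author's point about there being no moat in the agent code itself.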
Posted by msmash from Slashdot
From the tussle-continues department: Reuters: The European Union's landmark rules on AI will be rolled out according to the legal timeline in the legislation, the European Commission said on Friday, dismissing calls from some companies and countries for a pause.
Google owner Alphabet, Facebook owner Meta and other U.S. companies as well as European businesses such as Mistral and ASML have in recent days urged the Commission to delay the AI Act by years. Financial Times adds: In an open letter, seen by the Financial Times, the heads of 44 major firms on the continent called on European Commission President Ursula von der Leyen to introduce a two-year pause, warning that unclear and overlapping regulations are threatening the bloc's competitiveness in the global AI race.
[...] The current debate surrounds the drafting of a "code of practice," which will provide guidance to AI companies on how to implement the act that applies to powerful AI models such as Google's Gemini, Meta's Llama and OpenAI's GPT-4. Brussels has already delayed publishing the code, which was due in May, and is now expected to water down the rules.