Posted by msmash from Slashdot
From the moving-forward department: Anthropic launched Claude Opus 4 and Claude Sonnet 4 today, positioning Opus 4 as the world's leading coding model with scores of 72.5% on SWE-bench and 43.2% on Terminal-bench. Both models use a hybrid architecture, offering near-instant responses as well as an extended thinking mode for complex reasoning tasks.
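A minimal sketch of how the two modes might be selected through the Anthropic Python SDK; the model identifiers, token budgets, and prompts below are assumptions for illustration, not values confirmed by the announcement:

```python
# Hedged sketch: selecting between near-instant and extended thinking modes
# via the Anthropic Messages API. Model IDs and budgets are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Near-instant mode: a plain Messages API call with no thinking budget.
fast = client.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this changelog in two lines."}],
)

# Extended thinking mode: the same endpoint with a thinking budget, so the
# model reasons at length before producing its final answer.
deep = client.messages.create(
    model="claude-opus-4-20250514",     # assumed model ID
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Find the race condition in this scheduler."}],
)
```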
The models introduce parallel tool execution and memory capabilities that allow Claude to extract and save key facts when given local file access. Claude Code, previously in research preview, is now generally available with new VS Code and JetBrains integrations that display edits directly in developers' files. GitHub integration enables Claude to respond to pull request feedback and fix CI errors through a new beta SDK.
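For the parallel tool execution the post mentions, a single Messages API turn can return several tool_use blocks that the caller may then run side by side. The sketch below assumes two made-up local tools (run_tests, read_file) and an illustrative model ID; it is a rough illustration of the mechanism, not the Claude Code SDK, which the post only describes at a high level:

```python
# Hedged sketch of tool use with the Messages API: the model may issue
# multiple tool calls in one turn, which the caller can execute in parallel.
import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "run_tests",  # hypothetical local tool
        "description": "Run the project's test suite and return the failures.",
        "input_schema": {"type": "object", "properties": {}, "required": []},
    },
    {
        "name": "read_file",  # hypothetical local tool
        "description": "Read a file from the working tree.",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
]

response = client.messages.create(
    model="claude-opus-4-20250514",   # assumed model ID
    max_tokens=2048,
    tools=tools,
    messages=[{"role": "user", "content": "Why is CI failing on this branch?"}],
)

# Collect every tool call issued this turn; with parallel tool execution
# there may be more than one in a single response.
calls = [block for block in response.content if block.type == "tool_use"]
for call in calls:
    print(call.name, call.input)
```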
Pricing remains consistent with previous generations at $15/$75 per million input/output tokens for Opus 4 and $3/$15 for Sonnet 4. Both models are available through Claude's web interface, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. Extended thinking is included in the Pro, Max, Team, and Enterprise plans, and Sonnet 4 is also available to free users.
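For a sense of what those rates mean in practice, here is the arithmetic on a hypothetical request; the 20k/2k token counts are made up purely to show the calculation:

```python
# Back-of-the-envelope cost check using the prices quoted above
# ($ per million input / output tokens).
PRICES = {
    "opus-4":   {"input": 15.00, "output": 75.00},
    "sonnet-4": {"input": 3.00,  "output": 15.00},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 20k-token prompt with a 2k-token reply.
print(f"Opus 4:   ${cost_usd('opus-4', 20_000, 2_000):.2f}")    # $0.45
print(f"Sonnet 4: ${cost_usd('sonnet-4', 20_000, 2_000):.2f}")  # $0.09
```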
The startup, which counts Amazon and Google among its investors, said Claude Opus 4 could autonomously work for nearly a full corporate workday -- seven hours. CNBC adds: "I do a lot of writing with Claude, and I think prior to Opus 4 and Sonnet 4, I was mostly using the models as a thinking partner, but still doing most of the writing myself," Mike Krieger, Anthropic's chief product officer, said in an interview. "And they've crossed this threshold where now most of my writing is actually ... Opus mostly, and it now is unrecognizable from my writing."
Posted by msmash from Slashdot
From the flywheel-effect department: Google's expansion of Gemini's data access through "personal context" represents a fundamental shift in how AI assistants operate. Unlike competitors that start from scratch with each new user, Gemini can immediately tap into years of accumulated user data across Google's ecosystem. The Verge adds: Google first started letting users opt in to its "Gemini with personalization" feature earlier this year, which lets the AI model tap into your search history "to provide responses that are uniquely insightful and directly address your needs." But now, Google is taking things a step further by unlocking access to even more of your information -- all in the name of providing you with more personalized, AI-generated responses.
During Google I/O on Tuesday, Google introduced something called "personal context," which will allow Gemini models to pull relevant information from across Google's apps, as long as it has your permission. One way Google is doing this is through Gmail's personalized smart replies -- the AI-generated messages that you can use to quickly reply to emails.
To make these AI responses sound "authentically like you," Gemini will pore over your previous emails and even your Google Drive files to craft a reply tailored to your conversation. The response will incorporate your tone, the greeting you use most often, and even your "favorite word choices," according to Google.