Quote:The Delaware judge who once ordered Elon Musk’s pay package be revoked is stepping aside from several ongoing cases against him over allegations of bias.
Court of Chancery Chancellor Kathaleen St. J. McCormick announced the stunning move Monday after Musk’s lawyers accused her of having it in for the billionaire, pointing to a LinkedIn post that appeared to show her “supporting” commentary mocking him.
Musk’s nemesis said in a filing that she will reassign the group of suits to different judges but insisted she was not in fact biased against the high-profile defendant.
“The motion for recusal rests on a false premise — that I support a LinkedIn post about Mr. Musk, which I do not in fact support,” she wrote. “I am not biased against the defendants in these actions.
“But the motion for reassignment is granted,” McCormick continued. “As should be obvious, disproportionate media attention surrounding a judge’s handling of an action is detrimental to the administration of justice.
“Fortunately, the Court of Chancery is far greater than any one person.”
She said the cases would be taken over by three colleagues in Delaware’s Court of Chancery — the nation’s premier venue for corporate litigation, where judges routinely decide high-stakes disputes involving fiduciary duties and board governance for companies incorporated in the state.
Last week, lawyers for Musk demanded McCormick recuse herself because she pressed a button indicating she “supported” a post mocking Musk for being found liable for tweets he posted in 2022 about his $44 billion Twitter deal. LinkedIn’s “support” feature is similar to “liking” a post on LinkedIn and other social media platforms.
Musk’s attorneys said the judge’s alleged social media activity created an unavoidable appearance of bias under Delaware law, which requires recusal where there is “any reasonable basis to question the impartiality of the trial judge.”
“I either did not click the ‘support’ icon at all, or I did so accidentally. I do not believe that I did it accidentally,” the jurist replied last week.
The litigation before McCormick involved consolidated shareholder derivative lawsuits accusing Musk and Tesla’s board of breaching fiduciary duties, including claims tied to executive compensation and broader corporate governance issues.
One of the central cases, brought by a Detroit pension fund, challenges how Tesla’s directors awarded themselves stock-based compensation, alleging the company was harmed by excessive pay and weak oversight.
The lawsuits have been combined with related claims, some of which involve Musk’s conduct surrounding the 2022 Twitter deal, creating overlap with issues raised in the recent federal case in California.
McCormick has been at the center of multiple headline-grabbing cases involving Musk, including the 2022 lawsuit that pushed him to complete his $44 billion acquisition of X, then known as Twitter, after he’d attempted to walk away from the deal.
Quote:An investigation by Fairlinked e.V., a group representing commercial LinkedIn users, reveals that the popular business-focused social platform has been secretly collecting sensitive user data, potentially affecting 405 million people.
According to the report, LinkedIn deploys code on its website that scans users’ browsers for installed software, including browser extensions.
The code checks for thousands of specific extensions using their unique identifiers, compiles the findings, encrypts the data, and sends it to LinkedIn’s servers. According to the report, LinkedIn shares this data with third-party companies, including an American-Israeli cybersecurity firm, HUMAN Security.
All data extraction occurs silently in the background without explicit user consent and is not disclosed in LinkedIn’s public privacy policy.
That is stirring privacy controversy: LinkedIn accounts reveal real identities, including users’ names, employers, and job titles, so any collected data could be linked to identifiable individuals.
The claims were published as part of the group’s “BrowserGate” campaign, which the group calls one of the “largest corporate espionage and data breach scandals in digital history.”
What data is being harvested when you use LinkedIn?
Some of the browser extensions identified in the scan may indicate sensitive personal information, including religious beliefs, political views, health conditions, or whether a user is actively seeking employment.
According to the report, Microsoft-owned LinkedIn injects JavaScript into its website that searches each user’s browser for installed software. In total, the code reportedly scans for more than 6,000 extensions.
“LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile,” the group said.
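To see why scanning for specific extensions is so privacy-sensitive, consider a minimal sketch in Python: a lookup table maps extension IDs to the traits they imply. The IDs and categories below are invented for illustration; per the report, the actual scan covers more than 6,000 real extension identifiers.

```python
# Hypothetical mapping of browser-extension IDs to the sensitive traits they
# could reveal. The IDs and categories here are invented for this example.
EXTENSION_TRAITS = {
    "aaaa1111": "religion: prayer-times extension",
    "bbbb2222": "politics: partisan news filter",
    "cccc3333": "health: ADHD focus aid",
    "dddd4444": "employment: job-search tracker",
}

def infer_traits(installed_ids):
    """Return the sensitive traits implied by a user's installed extensions."""
    return [EXTENSION_TRAITS[eid] for eid in installed_ids if eid in EXTENSION_TRAITS]

# A user with a job-search tool and a prayer-times extension installed:
print(infer_traits(["dddd4444", "aaaa1111", "zzzz9999"]))
```

The point of the sketch is that the raw scan result (a list of opaque IDs) becomes sensitive the moment it is joined against a table like this — which is exactly the inference step Fairlinked alleges.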
Under the European Union’s General Data Protection Regulation (GDPR), processing such categories of data typically requires explicit user consent. Fairlinked alleges that LinkedIn does not obtain this consent or disclose the practice.
LinkedIn is also reported to detect a wide range of competing software tools, including major platforms like Salesforce, HubSpot, and Pipedrive, potentially allowing it to map which companies rely on which services.
In total, the scan is said to cover more than 200 competing products, including tools such as Apollo, Lusha, and ZoomInfo.
"We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members' data, which, at scale, impacts site stability. We do not use this data to infer sensitive information about members,"
Quote:Americans’ personal data could be collected and stored overseas — even if they’ve never downloaded a foreign-developed app themselves — according to a new FBI alert warning about the risks tied to popular mobile platforms.
That means information like a person’s name, email address or phone number could be pulled from someone else’s contact list and potentially stored abroad if a friend or family member grants an app access to their device.
The warning comes after years of scrutiny over TikTok’s ties to China, but the FBI alert suggests the concerns extend beyond any single platform to a broader range of foreign-developed apps.
In a public service announcement, the FBI said many widely used apps developed overseas, particularly those tied to China, may access extensive data once permissions are granted, including address books containing information on both users and non-users.
The bureau also warned that some apps may continue collecting data in the background after access is granted and, in certain cases, store that information on servers in countries where local laws could allow government access.
“Developer companies can store collected data on users’ private information and address books, such as names, e-mail addresses, user IDs, physical addresses, and phone numbers of their stored contacts,” the FBI said. “The app can persistently collect data and users’ private information throughout the device, not just within the app or while the app is active.”
The FBI did not name specific companies, but the warning could apply to a range of widely used apps developed by Chinese firms — including video-editing platform CapCut, shopping apps like Temu and SHEIN, and social media platforms such as Lemon8 — several of which rank among the most downloaded apps in the United States.
U.S. officials have long warned that data collected by Chinese-linked platforms could be used to build detailed profiles of Americans, map personal and professional networks, and potentially support intelligence-gathering efforts, particularly if accessed under China’s national security laws.
The FBI added that apps operating in China are subject to the country’s national security laws, which could allow the government to access user data.
The FBI also pointed to possible warning signs that an app may be collecting more data than expected, including unusual battery drain, spikes in data usage, or unauthorized account activity after installation — indicators that could suggest background data collection or other suspicious behavior.
The bureau urged users to limit unnecessary data sharing, download apps only from official app stores, and regularly review permissions granted to mobile platforms. The bureau also warned that apps obtained from third-party sites may carry malware designed to gain unauthorized access to personal data.
Quote:There’s a new scam to look out for in a place you wouldn’t expect.
Security experts at the Identity Theft Resource Center (ITRC) are warning about a rise in “CAPTCHA scams,” a growing threat that weaponizes the little checkbox meant to protect consumers and keep bots out.
Instead of verifying that users are human and keeping bots out, these fake prompts trick people into enabling scams and installing malware.
Users will end up on a webpage, likely through a misleading ad, suspicious download link or pirated content site, and they’ll immediately be presented with what appears to be the standard human verification test.
But rather than simply checking a box and/or selecting images, the page will ask users to take additional steps, like clicking “Allow” on a browser notification request, or copying and pasting a command into their system.
Clicking “Allow” can inundate the user’s device with scam notifications, such as fake virus alerts, phishing links or fraudulent offers. In some cases, following the instructions can lead to the installation of malicious software.
The website might tell you there’s an error and provide these “simple” steps to fix it, such as pressing a specific sequence of keys on your keyboard, like the Windows Key + R, then Ctrl + V.
When this happens, the keystrokes open the Windows Run box and paste in a “script” the attacker has silently copied to the user’s clipboard; running it downloads malware onto the computer.
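The paste-and-run step leaves a recognizable fingerprint in the clipboard. As a rough sketch (the pattern list and function name are illustrative, not taken from any ITRC tooling), a simple heuristic might flag suspicious clipboard commands like so:

```python
import re

# Illustrative patterns seen in paste-and-run ("ClickFix"-style) lures.
# This is a hypothetical sketch, not an exhaustive or official signature set.
SUSPICIOUS_PATTERNS = [
    r"powershell.*-w(indowstyle)?\s+hidden",   # hidden PowerShell windows
    r"powershell.*-enc\w*\s",                  # encoded PowerShell payloads
    r"mshta\s+https?://",                      # mshta fetching a remote script
    r"curl\s+.*\|\s*(sh|bash)",                # pipe-to-shell downloads
    r"cmd\s*/c\s+start",                       # cmd launching a payload
]

def looks_like_clickfix(clipboard_text: str) -> bool:
    """Return True if clipboard text resembles a paste-and-run payload."""
    text = clipboard_text.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix("powershell -w hidden -enc SQBFAFgA"))  # True
print(looks_like_clickfix("meeting notes for tuesday"))           # False
```

No legitimate CAPTCHA ever needs the user to paste anything, so any hit from a check like this during a “verification” flow is a strong signal to walk away.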
Unlike traditional phishing scams, CAPTCHA scams — which have been seen on both desktop and mobile browsers — tend to rely on compromised advertising networks or redirect chains that send users to malicious pages without a clear warning sign.
Part of the reason so many people fall for these scams is that CAPTCHA prompts usually appear when users are trying to access something quickly, and the urgency pushes caution out the window.
Plus, a fake CAPTCHA looks like a legitimate prompt, so nothing signals that users should be suspicious of it.
Experts have emphasized that real CAPTCHAs will never ask users to enable browser notifications, run commands, use keyboard shortcuts or download additional software. If a site asks you to open a “Run” box or paste a code, it’s a scam.
Quote:A U.S. judge on Thursday temporarily blocked the Pentagon’s blacklisting of Anthropic, the latest turn in the Claude maker’s high-stakes fight with the military over AI safety on the battlefield.
Anthropic’s lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries.
Hegseth’s unprecedented move, which followed Anthropic’s refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts.
Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm.
Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action.
U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out.
Lin’s ruling is not final, and the case is still pending.
Anthropic is the first U.S. company to be publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.
In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety.
The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.
The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military’s past praise of Claude.
Quote:Anthropic has been scrambling to contain a self-inflicted mess after it accidentally leaked a treasure trove of internal code that powers one of its most valuable artificial intelligence tools, according to reports.
The code serves as instructions for Claude Code, an AI agent app that developers and businesses pay top dollar to use to program and build applications of their own.
Anthropic’s competitors and hordes of startups and developers now have the goods to essentially clone features of Claude Code — a shortcut to reverse-engineering them, the Wall Street Journal noted.
By Wednesday morning, Anthropic representatives had used a copyright takedown request to remove more than 8,000 copies and adaptations of the source code that developers had shared on the programming platform GitHub.
The leak of “some internal source code” didn’t expose any customer information or data, a spokesman for Anthropic told The Post. The secret inner mathematics of the company’s pricey AI models reportedly weren’t revealed, either.
“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again,” the spokesman said.
Still, the leak revealed information that helps the company stay ahead of competitors, including tools and instructions for getting its AI models to work as coding agents, according to the Journal.
The leak also gives hackers fresh ammunition as they hunt for ways to exploit Claude Code software or use its model to launch cyberattacks.
The snafu reportedly began Tuesday, when Anthropic updated its AI tool. Like most proprietary software, Claude’s source code is usually scrambled and unintelligible. But this time, the company posted a file to GitHub that linked back to code that outsiders could download and interpret.
The folly was spotted by a user on social media site X, and word spread from there.
Quote:YouTube employees admitted that their goal was “viewer addiction” and killed proposed safety tools for kids because they wouldn’t provide a sufficient “ROI” — financial lingo for “return on investment,” according to bombshell court documents reviewed by The Post.
The explosive records, which include internal chat logs and presentations from YouTube employees, were unsealed ahead of a series of landmark trials slated for this summer in Oakland, Calif., in the US District Court for the Northern District of California. Google-owned YouTube, Meta, Snap and TikTok are listed as defendants.
In a deposition in the case last March, John Harding, a longtime vice president of engineering at YouTube, was confronted by plaintiffs’ attorneys with an internal email from June 7, 2012, in which a YouTube employee, whose name was redacted, stated the “goal is not viewership, it’s viewer addiction.”
Harding confirmed that the email was authentic but dodged responsibility, claiming that staffers were discussing a “video creation app” that “wasn’t even built for viewers.” The next portion of the exchange between Harding and the attorney is redacted.
The federal case is part of what legal experts and critics have called a “Big Tobacco” moment for Google and Meta. Both companies were found liable last week for fueling social media addiction in a separate landmark case brought in California state court on behalf of a 20-year-old woman known as KGM.
The shock revelations from the Oakland federal case contradict public statements from executives who have claimed the app was never meant to be addictive and any harmful outcomes for kids are due to third-party content rather than its intentional app design choices.
During the state trial last month, YouTube executive Cristos Goodrow testified that the app was “not designed to maximize time” and the company doesn’t “want anybody to be addicted.”
This summer’s federal case in Oakland, however, includes an internal YouTube presentation from April 2018 recounting study findings that “excessive video watching is related to addiction” and that it results in a “‘quick fix’ of dopamine.”
The presentation even includes a colorful flow chart labeled “addiction cycle,” complete with arrows showing how “guilt” is an “emotional trigger” that leads to “craving, ritual and using.”
“Researchers feel that YT is built with the intention of being addictive,” the document said. “Designed with tricks to encourage binge-watching (i.e., autoplay, recommendations, etc.).”
US District Judge Yvonne Gonzalez Rogers is presiding over a case that centralizes more than 2,000 pending lawsuits against social media firms that make similar allegations. A group of school districts has a trial date in June, while a coalition of state attorneys general will face off against Big Tech’s attorneys beginning in August.
Quote:Artificial intelligence chatbots feed humans’ desire for flattery and approval at an alarming rate, leading the bots to give bad — even harmful — advice and making users self-absorbed, a new study found.
The chatbots overwhelmingly adopt a people-pleasing, “sycophantic” model to keep a captive audience and, in turn, distort users’ judgment, critical thinking and self-awareness, warns the Stanford University study, published Thursday.
The study probed 11 AI systems, ranging from ChatGPT to China’s DeepSeek, and found that each shows some form of sycophancy — that is to say, they are overly agreeable with their users and affirm their thoughts with little to no pushback.
The 11 chatbots affirmed a user’s actions an average of 49% more often than actual humans did, including in scenarios involving deception, illegal or socially irresponsible conduct, and other harmful behaviors, the study found.
The fawning tendency — a tool used by the bots to keep users engaged and coming back for more — becomes particularly unhealthy when users go to AI for advice, the study found.
“We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what,” said study author Myra Cheng, a doctoral candidate in computer science at Stanford.
The researchers noted that the sycophantic cycle “creates perverse incentives,” since it continues to “drive engagement” despite being the bot’s most harmful feature.
They emphasized that the average user is likely cognizant of the bots’ affirmation, but doesn’t realize that it “is making them more self-centered, more morally dogmatic.”
Users were given advice that could worsen relationships or reinforce harmful behaviors, leading to an erosion of social skills.
“People who interacted with this over-affirming AI came away more convinced that they were right, and less willing to repair the relationship. That means they weren’t apologizing, taking steps to improve things, or changing their own behavior,” study co-author Cinoo Lee explained.
At the same time, more people are turning to AI as a replacement for traditional therapists — the very professionals who are trained to help dismantle harmful habits and ways of thought.
In extreme cases, some companies’ chatbots have goaded suicidal users to take their own lives. The study warns that this same technological flaw still persists across a wide range of users’ interactions with chatbots.
Quote:The internet’s favorite encyclopedia has officially banned its 260,000 human editors from using artificial intelligence to write articles — a major crackdown as so-called “AI slop” floods the web.
The new policy, approved by volunteers at the Wikimedia Foundation’s flagship site Wikipedia, bars the use of large language models (LLMs) like ChatGPT from generating encyclopedic content, citing concerns over accuracy, sourcing and reliability.
Wikipedia leaders say AI-generated text often breaks the site’s core tenets, including strict standards around verifiability and neutrality, because chatbots are prone to so-called “hallucinations” — made-up facts, broken links and references that lead to nowhere.
Editors can still use AI in limited ways, such as translating articles from other languages or suggesting minor copy edits, as long as humans review every change and no new information is introduced.
Last year, Wikipedia came up with its own bot-detection guidelines for editors that highlight common “tells” of AI writing. Editors are trained to spot red flags like inaccurate or fake citations, overused phrases and cliches, wordy explanations and sudden style transitions.
Suspected cases are typically reviewed by other editors who can challenge, revise or remove questionable content.
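The “overused phrases” tell lends itself to a toy heuristic. A minimal Python sketch (the phrase list is invented for this example, not Wikipedia’s actual guideline) might score a passage like this:

```python
# Toy heuristic for one documented "tell" of AI writing: stock phrases.
# The phrase list below is illustrative, not an official Wikipedia list.
STOCK_PHRASES = [
    "rich tapestry",
    "delve into",
    "it is important to note",
    "in the ever-evolving landscape",
]

def ai_tell_score(text: str) -> int:
    """Count stock-phrase occurrences in the text (higher = more suspect)."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in STOCK_PHRASES)

print(ai_tell_score("Let us delve into the rich tapestry of local history."))  # 2
print(ai_tell_score("The bridge was completed in 1932."))                      # 0
```

A score like this could only ever flag candidates for human review; as the policy itself reflects, the final call on removing content stays with editors.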
Ilyas Lebleu, a volunteer Wikipedia editor in France and founding member of the WikiProject AI Cleanup squad, told NPR in September, “We started to notice a lot of articles which were written in a style that didn’t match the style we usually saw on Wikipedia.”
Last October, Wikipedia co-founder Jimmy Wales also blasted current AI models as unreliable, calling the situation a “mess,” per the BBC, and warning that the tech is not ready to replace human editors.
The policy change comes after months of debate among Wikipedia’s moderators, who accepted the new rules in a 40 to 2 vote.
Lebleu, who uses the handle Chaotic Enby on the site, helped write the new guideline, telling 404Media last week that the change has been a long time coming as the growing number of AI-generated articles had become unmanageable for editors.
“The mood was shifting, with holdouts of cautious optimism turning to genuine worry.”
Still, there’s concern among Wikipedia leaders and supporters that the AI takeover has already gone too far. According to recent data, ChatGPT has overtaken Wikipedia in monthly visits, with human page views down 8% in late 2025 compared with 2024.
Quote:Perplexity AI CEO Aravind Srinivas is coming under fire for arguing people should embrace being replaced by artificial intelligence since they don’t like their jobs, anyway.
The co-founder of the San Francisco-based company even said on the All-In podcast that the jarring shift in how work gets done will lead to a “glorious future” everyone should be happy about.
“The reality is most people don’t enjoy their jobs,” the exec said on the episode published Monday.
“There’s suddenly a new possibility, a new opportunity, to use these tools, learn them, and start your own mini business,” he opined. “Even if there is temporary job displacement to deal with, that sort of glorious future is what we should look forward to.”
Listeners were quick to voice outrage, with some saying Srinivas was out of touch with everyday people who are struggling to make ends meet after getting laid off.
“A man worth millions just told the single mother who lost her job that she should be grateful because now she can start a business using his product and called her unemployment a glorious future,” one commenter wrote on X. “This is what happens when you’ve never needed a paycheck to keep the lights on.”
Asked for comment Tuesday, a Perplexity spokesperson told The Post: “Since Perplexity launched in December 2022, Americans have filed 16 million new business applications, contributing to the reversal of a 40-year decline and proving yet again that breakthrough technologies don’t eliminate opportunity, they create it.”
Recent months have seen a number of large companies announce brutal layoffs — with some firms, like Amazon and Block, blaming AI for at least part of the trend.
“His view treats job loss as a temporary shock that opens a path toward one-person or very small firms that produce real revenue without the payroll that older companies needed,” one commenter wrote.
“But the problem with this scenario is that losing a stable paycheck is painful for most, and many workers cannot instantly become founders. Economists still disagree on whether AI is replacing labor at large scale or merely giving companies a new excuse for cuts.”
Quote:Meta is slashing hundreds of employees in Silicon Valley as the tech giant heavily invests in artificial intelligence and weighs axing over 20% of its workforce.
The Facebook parent company is cutting nearly 200 workers in the San Francisco Bay Area, according to new state filings.
The reductions will hit 124 employees in Burlingame, Calif. and another 74 in nearby Sunnyvale, with the cuts taking effect in late May and all affected positions permanently eliminated, filings cited by the San Francisco Chronicle show.
“Teams across Meta regularly restructure or implement changes to ensure they’re in the best position to achieve their goals,” a Meta spokesperson told The Post.
“Where possible, we are finding other opportunities for employees whose positions may be impacted.”
The company added that it was still hiring for critical roles and that its headcount as of Dec. 31, 2025 was 78,865 — a 6% increase year-over-year.
The move comes as Meta signals a massive strategic shift — away from labor-heavy operations and toward machine-driven systems, according to experts. Recent AI efforts include a planned $10 billion spend on Meta’s data center in El Paso, Texas.
Meanwhile, recent weeks have seen the company lay off about 700 employees working in operations, recruiting, sales and Meta’s “Reality Labs” unit, the Chronicle noted.
The company is also weighing far deeper cuts.
Senior employees have reportedly been told to prepare for layoffs that could affect more than 20% of the company’s workforce — about 15,000 workers.
“This is a speculative report about theoretical approaches,” a Meta spokesperson said when asked about the plan.
The potential reductions would mark the biggest layoffs at Meta since CEO Mark Zuckerberg oversaw more than 20,000 job cuts during the company’s “year of efficiency” push in 2022 and 2023.
On a Meta earnings call, Zuckerberg said Meta is “starting to see projects that used to require big teams now be accomplished by a single, very talented person,” thanks to AI tools.
“When a company is cutting hundreds of people and at the same time gearing up to spend $135 billion on AI, it’s sending a very clear message: the center of gravity is shifting from human-powered operations to machine-augmented operations,” Matt Britton, author of “Generation AI,” told The Post.
Quote:A vicious online attack — allegedly put into motion by a California nonprofit — to torpedo the construction of a massive AI data center led to calls for “public executions” and Luigi Mangione-inspired death threats, according to a new lawsuit.
The defamation lawsuit, filed by Imperial Valley Computer Manufacturing and its attorney, Sebastian Rucci, claims nonprofit Comite Civico del Valle (CCV) and the group’s executive director, Jose Luis Olmedo Velez, are attempting to stall the data center project in a bid to force a financial settlement.
The group also allegedly hired Jake Tison to create a brutal online campaign, “publishing over 100 false and defamatory posts and videos across social media platforms” in an effort to make IVCM and Rucci look bad, according to the lawsuit.
Tison’s purported online posts called Rucci a “life-long fraud” and accused him of violating the California Environmental Quality Act, a statute that has become notorious for being leveraged to gum up development projects across the state, court documents obtained by The California Post said.
The suit alleges Tison spread false posts that Rucci had been thrown in jail for fraud. In reality, Rucci did spend a month in jail but for a misdemeanor liquor license violation, not fraud, according to the suit.
Tison’s alleged online attacks then spiraled into something more violent and dangerous when his followers began to read his posts, according to Rucci and IVCM.
The lawsuit alleges Tison’s followers commented things like “public executions” and threatened to “burn the data center to the ground.” “Why can’t somebody just get him like Luigi did with the UntiedHealthcare CEO,” another wrote.
CCV presents itself as an environmental justice nonprofit, but has “perfected a lucrative greenmail extortion racket: it files CEQA challenges to delay projects, then demands massive ‘public benefit’ settlements that it alone controls,” according to the documents.
“Defendants also engaged in environmental terrorism by intimidating Imperial County Supervisors with threats of ‘slaughter at the voting booth’ and placing their photos on milk cartons to coerce denial of a ministerial lot merger,” according to the documents.
Quote:Elon Musk is requiring banks and other advisers working on SpaceX’s planned IPO to buy subscriptions to Grok, his artificial intelligence chatbot, the New York Times reported Friday, citing people familiar with the matter.
Some banks have agreed to spend tens of millions of dollars a year on the chatbot and have begun integrating it into their IT systems, the report said.
Morgan Stanley, Goldman Sachs, JPMorgan Chase, Bank of America and Citigroup are serving as active bookrunners, or the lead banks managing the deal, Reuters reported earlier this week.
Musk and SpaceX did not respond to Reuters’ requests for comment.
JPMorgan Chase, Goldman Sachs, Citigroup and Bank of America declined to comment. Morgan Stanley did not immediately respond to Reuters’ queries.
The Starbase, Texas-headquartered rocket maker boosted its target initial public offering valuation above $2 trillion, according to a Bloomberg News report a day earlier, setting the stage for what could become the largest stock market listing on record.
The company aims to raise a record $75 billion, which would dwarf previous mega-IPOs such as Saudi Aramco in 2019 and Alibaba in 2014.
Quote:Google has issued a security alert to Chrome users after confirming that cybercriminals had exploited a vulnerability in the browser, marking the second such advisory in days.
Dubbed CVE-2026-5281, the stealth bug is a zero-day exploit, an under-the-radar software or hardware security flaw unknown to the vendor, leaving it “zero days” to fix the hole before attackers exploit it.
This allowed hackers to take advantage of the oversight before this patch became widely available, potentially putting the web browser’s 3.5 billion users at risk, Forbes reported.
CVE-2026-5281 reportedly affects the Dawn WebGPU component of Chrome, which translates a website’s complex graphics instructions for different devices, helping advanced visuals and computations run smoothly across various systems.
Should a cybercriminal manage to exploit the flaw, they could corrupt memory and crash the system, potentially allowing them to run malicious code through a crafted HTML page.
Google has remained fairly hush-hush about the nature of the vulnerability, the fourth zero-day it has patched this year as the browser becomes more and more ubiquitous.
“Access to bug details and links may be kept restricted until a majority of users are updated with a fix,” Google Chrome team member Srinivas Sista said in a statement.
However, while Google is rolling out a new security update to remedy the flaw, along with a whopping 20 others, the fix could take weeks to reach all users, during which time their systems could be compromised.
In the interim, Chrome users are advised to nip this exploit in the bud. First, they should go to the three-dot menu, hover over “Help,” then select “About Google Chrome.”
This will prompt the browser to automatically install any pending updates, whereupon users should restart the browser to enact this fix.
This isn’t the first Chrome zero-day to be exploited of late.
On Tuesday, Google announced it will now allow US users to change their Google Account username without opening a new account or losing access to their data.
Translation: you’re no longer stuck with a regrettable early-aughts account name — looking at you, slackerboy666, kandyraver69 and chewbacca_is_my_stock_broker.
It seems the feature is long overdue.
According to Google, “Can you change your Gmail address?” was the top-searched “can you” Gmail-related question over the past year in the US.
Now that that particular wish is being granted, patience is in order.
According to the company’s support page, the feature is rolling out gradually across the US, so users may not have immediate access to it.
Here’s how to make the change:
Go to myaccount.google.com/google-account-email.
Sign in if prompted.
Click “Personal info.”
Click “Email,” then “Google Account email.”
Under “Google Account email,” click “Change Google Account email.” If this option does not appear, it may not yet be possible to change your Google Account email.
Enter a new username. You’ll need to choose a name that isn’t already in use and hasn’t previously been used and then deleted.
Click “Change email,” then “Yes, change email.”
Follow the steps on the screen.
When complete, you’ll have a brand spanking new Google Account email, and your old address will show as an alternate.
Users can change their username only once every 12 months, and they won’t be able to delete their new email address during that period.
Google shared that old emails will be preserved and that users will be able to sign in to Google services using both the old and new addresses.
In the wake of Google’s announcement, many users are rejoicing at the opportunity to rebrand themselves.
Quote:Parents are desperate for help to protect their children from harmful social media platforms following two bombshell court rulings last week that hit the tech giant Meta with penalties in the millions.
“Ninety-five percent of our kids are using these products that we know are harmful,” Julie Frumin, a 43-year-old mother of two from Westlake Village, north of Los Angeles, fumed to The Post. “We need help. Help us!”
But others finally see more than a glimmer of hope in the wake of the cases.
Deb Schmill, founding member of ParentsSOS, helped craft legislation for phone-free schools in Massachusetts. Her daughter, Becca Mann Schmill, was 18 when she died of fentanyl poisoning from drugs she purchased through a social media platform.
Schmill told The Post that the court victories are a “watershed moment,” proclaiming they “are a major first step toward ending one of the most shameful public health failures in modern American history.”
On Tuesday, a jury in New Mexico ruled that Meta, which owns Instagram, Facebook and WhatsApp, prioritized profits over safety, misled users and failed to protect children from sexual predators. The jury ordered Meta to pay $375 million in civil penalties to 37,500 users, the maximum penalty allowed in the state.
The tech company denies any wrongdoing and plans to appeal the verdict.
The next day, a jury in Los Angeles sided with a 20-year-old woman, known only by her first name Kaley, who had accused Instagram and Google’s YouTube of making her addicted to their apps through features like scrolling and autoplay. Meta is now liable for $4.2 million in damages, and Google for $1.8 million.
Both Meta and YouTube insist that their platforms are safe for kids — but tech companies are facing more lawsuits all over the country.
A glitch caused a fleet of Chinese robo-taxis to come to an unscheduled stop in Wuhan, China, leaving passengers trapped and stranded in traffic.
A preliminary investigation by Wuhan police revealed that over 100 of the AI-powered Apollo Go robo-taxis, operated by Chinese tech giant Baidu, ground to a screeching halt on a busy highway Tuesday night following a “system malfunction,” ABC News reported. At least one collision also occurred, according to CNBC.
One customer told local media that their automated cab stalled while rounding a corner, a failure that was announced on the robo-ride’s screen.
“Driving system malfunction,” it read. “Staff are expected to arrive in 5 minutes.”
Unfortunately, when help didn’t come, the stranded passenger was compelled to open the door and vacate the vehicle.
While some riders exited on their own, others were wary of leaving, as they were in the middle of an active ring road (an elevated roadway without traffic lights, designed to keep traffic flowing) with cars moving at high speed on either side of them.
Thankfully, no injuries were reported during the freak incident, which saw multiple people rescued, local media reported.
This marked the first time robo-taxis have shut down en masse in the Middle Kingdom.
Baidu hasn’t released an official statement identifying the cause of the outage, although the techsperts at The Tech Buzz theorized it could have been due to a variety of factors, ranging from cloud connectivity issues to software bugs. As robo-rides require constant communication with servers for navigation, route optimization and other functions, one small hiccup can crash the entire system, causing the cars to freeze in place.
Quote:The state of California could pick up the tab for personal car repairs as part of an under-the-radar program — but only if you own an EV.
The California Air Resources Board has quietly launched a $10 million program that pays for battery repairs up to $7,500 for electric vehicles — or, if the battery can’t be salvaged, up to $10,000 if you choose to purchase or lease a new car.
Called the Zero-Emission Assurance Project, the plan launched statewide on March 30.
Anyone who has “purchased and continuously owned a used zero-emission vehicle” through two state funding assistance programs — CARB Financing Assistance or Clean Cars 4 All — is eligible, according to a state website.
The program covers up to $7,500 in repairs for failed battery or fuel cell components not covered by a warranty. If the battery can’t be salvaged, the state will subsidize the purchase of a new EV at up to $10,000.
“ZAP is available to anyone who purchased a used [vehicle] through one of CARB’s vehicle purchase incentive programs … who suspect that their vehicle’s critical battery or fuel cell components are in need of major repair,” explained Lindsay Buckley, spokesperson for the California Air Resources Board.
The freebies are the result of Assembly Bill 193, authored in 2018 by former assemblymember and current state Sen. Sabrina Cervantes of Riverside, and signed by then-Gov. Jerry Brown.
The $10 million program was funded through Gov. Gavin Newsom’s 2022 budget — a trailer bill tucked inside a massive $2.5 billion spending plan to boost zero-emission vehicles.
The battery repair incentive launched in select counties before rolling out statewide last month. To date, no EVs have been repaired through the program, though “a handful of vehicles have undergone initial inspections,” Buckley said.
Quote:China’s cyberspace regulator issued draft regulations on Friday to oversee the online development of digital humans, requiring clear labeling and banning services that could mislead children or fuel addiction.
The Cyberspace Administration of China’s proposed rules would require prominent “digital human” labels on all virtual human content and prohibit digital humans from providing “virtual intimate relationships” to those under 18, according to rules published for public comment until May 6.
The draft regulations would also ban the use of other people’s personal information to create digital humans without consent, or using virtual humans to bypass identity verification systems, reflecting Beijing’s efforts to maintain control in the face of advances in artificial intelligence.
Digital humans are also prohibited from disseminating content that endangers national security, inciting subversion of state power, promoting secession or undermining national unity, the draft rules said.
Service providers are advised to prevent and resist content that is sexually suggestive, depicts horror, cruelty or incites discrimination based on ethnicity or region, according to the document. Providers are also encouraged to take necessary measures to intervene and provide professional assistance when users exhibit suicidal or self-harming tendencies.
China made clear its ambitions to aggressively adopt AI throughout its economy in the new five-year policy blueprint issued last month. The push comes alongside tightening governance in the booming industry to ensure safety and alignment with the country’s socialist values.
The new rules aim to fill a gap in governance in the digital human sector, setting clear red lines for the healthy development of the industry, according to an analysis published on the cyberspace regulator’s website.
“The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy,” it added.
Quote:As thousands of Oracle employees awoke on Tuesday to an email informing them they were being laid off, the workers likely didn’t know the tech company had been busy trying to hire foreign staff.
According to U.S. Citizenship and Immigration Services data, Oracle filed 3,126 petitions to employ H-1B workers in fiscal years 2025 and 2026.
Employers must submit the paperwork when seeking to hire foreign workers in specialty occupations like technology.
Some 436 of those petitions were filed this year alone.
Amazon, which in January said it would axe 16,000 corporate employees, has filed for some 2,675 H-1B petitions during the same two-year fiscal period.
That came on top of news in October that the retail giant was axing 14,000 corporate workers.
News of Oracle’s attempts to bring in foreign workers sparked outrage among some on social media.
One user on the app Blind, an anonymous forum for verified employees, called the H-1B petitions a “slap in our face.”
“If this doesn’t make you angry, maybe you need to read some heartfelt posts on LinkedIn from Oracle employees who are US citizens and have been laid off after working at Oracle for years,” the user wrote.
Another commenter posted on the site: “Look at all big tech companies, they do massive layoffs then rehire at lower salary.”
A third added: “Transnational corporations are disloyal to the American state and the nation.”
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:WASHINGTON — The FAA is seeking out gamers to become air-traffic controllers.
A video released by the federal Department of Transportation on Friday targeted the video-game enthusiasts as part of the government’s hiring surge aimed at adding close to 9,000 more air-traffic controllers by 2028.
The video asks the gamers whether they’re “up for the challenge” of becoming an ATC but cautions that the job isn’t just a “game” — it’s a “career.”
“You’ll keep millions of people safe every day,” the video says, while touting average salaries of up to $155,000 by your third year on the job.
The hiring blitz comes after the tragic crash of an Air Canada flight into a firetruck on the runway at LaGuardia in New York City that killed the jet’s two pilots in March.
NTSB probers are trying to determine whether an air-traffic controller stepped away to answer an emergency phone call before the deadly collision and if staffing problems may have contributed to the horror.
Exit interviews with air-traffic controllers who leave the job show gaming is a hobby for many and shares similarities with their work, officials said.
The gamer-focused hiring push is aimed at “supercharging” recruitment efforts. No college degree is required.
“To reach the next generation of air traffic controllers, we need to adapt,” Transportation Secretary Sean Duffy said in a statement.
“This campaign’s innovative communication style and focus on gaming taps into a growing demographic of young adults who have many of the hard skills it takes to be a successful controller,” he said.
“Thanks to President Trump — we’ve already made incredible progress with the highest controller staffing levels in six years. There’s never been a more exciting time to become a controller and level up into a career with a strong purpose — keeping American families safe.”
Currently, 11,000 ATCs are on the job, and another 4,000 trainees will soon follow them into service for the FAA.
At least 2,400 were onboarded in the last year — making it the largest class of incoming ATCs to date and a record year for enrollment at the ATC Academy in Oklahoma City.
The FAA is planning to bring on 8,900 new ATCs by the end of fiscal year 2028 — with 2,000 in 2025, 2,200 in 2026, 2,300 in 2027 and 2,400 in 2028.
Between January 2025 and September 2025, DOT touted hiring 20% more ATCs than over the same period the previous year.
The year before the LaGuardia crash, an American Airlines commuter jet collided with a Black Hawk helicopter while attempting to land at Ronald Reagan Washington National Airport, killing all 67 passengers and crew aboard both aircraft.
Duffy promised in the aftermath of the January 2025 crash “to surge air traffic controllers” through training pipelines to “bring in the best and the brightest.”
After launching a merit-based pilot hiring push the next month, Duffy noted, “The American people don’t care what their pilot looks like or their gender — they just care that they are the most qualified man or woman for the job.”
Former Transportation Secretary Pete Buttigieg similarly sought to boost ATC applications — but with a focus on encouraging submissions “from women, minorities and individuals in underrepresented communities,” according to a July 2021 press release.
Quote:There can be no doubt: Alexis 'MarineLorD' Eusebio is the world's best “Age of Empires IV” player.
At the Red Bull Wololo: Londinium tournament, held for the first time in a packed Royal Albert Hall on April 6, 2026, the top-seeded French pro added a third trophy to his collection.
He embarked on a near-flawless run through the event, losing just one map, and retaining his title in style.
MarineLorD was joined on the winner's podium by Hera, who came from behind to claim a remarkable victory in “Age of Empires II: Definitive Edition," the other game from the legendary strategy series players competed in.
It was the eighth edition of Red Bull Wololo. This time, from April 1-6, twelve of the world’s top players competed across both titles at iconic venues around London, including the Odeon Luxe Cinema in Leicester Square.
The grand final culminated at the Royal Albert Hall – the first time an esports event has ever been staged at the legendary music venue. Most players never made it that far though.
Gan ‘Yo’ Yangfan, arguably the best “Age of Empires” player in China, was disappointed not to reach the final.
“For myself," he told Newsweek, "I think I am not satisfied with my performance this time, but I think the whole vibe, environment, the whole setup is amazing. I mean, it's good to play here. I just didn't play good myself."
Huy ‘ACCM’ Hoang, a Vietnamese “Age of Empires” pro, said the atmosphere can be both intimidating and inspiring. “For me, I feel nervous when I play in front of a lot of people. But then if I win, and I hear people cheering for me, I get pumped. You have more power. It's really nice when playing live, in an event with a lot of people."
As for the numbers: 3,000 fans attended the final in person, and streaming viewership peaked at over 110,000, higher than last year. A combined $250,000 prize pool was on the line. The event also featured a live 60-piece orchestra performing the “Age of Empires” soundtrack.
Quote:Jeff Shell is out as president of Paramount Skydance less than a year into the job, marking a stunning second downfall for the embattled media executive amid a bombshell lawsuit accusing him of leaking corporate secrets.
Shell’s ouster came Wednesday after weeks of turmoil tied to a $150 million lawsuit filed by a high-stakes gambler — and an internal probe into his conduct.
Paramount, or PSKY, announced the exit with a heavy dose of corporate-speak.
“Consistent with Mr. Shell’s commitment to prioritizing PSKY’s success, he has elected to transition from his positions as President of PSKY and a member of PSKY’s Board of Directors to focus on this lawsuit,” the company said in a statement. “PSKY is grateful for Mr. Shell’s many contributions and to have relied on him as a valued advisor.”
The dismissal marks the second time in just three years that Shell has been forced out of a top media job. He could not be reached Wednesday.
Shell was fired as CEO of NBCUniversal in 2023 after an internal investigation found he had an inappropriate relationship with a subordinate, then-CNBC anchor Hadley Gamble.
Shell and Paramount are being sued by RJ Cipriani, a Santa Monica, Calif.-based high-stakes gambler and self-described “fixer” who claims to operate behind the scenes placing stories and managing media narratives for powerful clients.
In his lawsuit, he alleges he provided Shell with 18 months of crisis communications and reputation management — including steering negative press and planting favorable coverage — without receiving payment.
Quote:Iran-linked hackers are disrupting systems tied to key US infrastructure after President Trump threatened an all-out assault against Tehran’s bridges and power plants, American officials said Tuesday.
The US Cybersecurity & Infrastructure Security Agency had put out a notice “urgently warning” the private sector that hackers backed by Iran’s Islamic Revolutionary Guards Corps were attempting to disrupt systems tied to America’s water, energy, transportation and communications set-ups.
“The group has targeted devices spanning multiple US critical infrastructure sectors, including Government Services and Facilities (to include local municipalities), Water and Wastewater Systems (WWS), and Energy Sectors,” the CISA said in a statement.
The hackers have had some success, the US said.
“This activity has led to… disruptions across several US critical infrastructure sectors through malicious interactions,” the agency said without elaborating on the systems affected so far.
The cyberterrorists have allegedly targeted products made by Rockwell Automation’s Allen-Bradley, one of the most widely used industrial automation brands in the US, officials said.
The attacks are aimed at the programmable logic controllers, or PLCs, that essentially act as the brain of the systems used in power and water plants.
The notice called on utilities and government agencies to make sure that none of their PLCs were connected to the Web, which could make them vulnerable to a cyberattack.
The warning from the CISA was echoed by the FBI, NSA, the Environmental Protection Agency, the Department of Energy, and US Cyber Command.
Iran-linked hackers have proven themselves successful at targeting the US during the war, with the Handala group aiming at Stryker, a Michigan-based medical equipment company, last month.
The logo of the Iran-linked hacking group was blasted across company login pages during the cyberattack, with Handala boasting that it had seized 50 terabytes of “critical data” from the medical giant, according to the Wall Street Journal.
Quote:A Silicon Valley lawmaker wants to require robotaxis like Google’s Waymo to hire human operators to be on standby locally in case the system goes haywire – like it did last winter when a blackout in San Francisco created a logjam of paralyzed robot cars.
The legislative push — which Waymo described as potentially crippling — comes after the company’s chief safety officer Mauricio Peña sparked outrage for admitting in US Senate testimony that the crucial human helpers it relies on live in the Philippines. The admission came as lawmakers grilled the company after one of its vehicles struck a child walking to school in Santa Monica.
State Sen. David Cortese, a San Jose Dem, says his new bill would ensure tech companies react more quickly during emergencies and keep robotaxis from blocking the path of emergency vehicles.
“Unfortunately, reports of AVs obstructing traffic, competing with first responders, and driving through active law enforcement activities continue to abound,” Cortese said as he introduced the legislation earlier this week.
Humans need to be based nearby to address “ambiguous situations” in real time, he added.
Cortese’s bill would require autonomous-vehicle companies to hire remote drivers and assistants based in the US and licensed in California, and mandate a staffing ratio of one human for every three vehicles.
Under the proposed legislation, a trained autonomous-vehicle worker would be required to arrive on scene within 10 minutes if called. Each robotaxi would also need a manual override option to allow public-safety officials to take over, though similar capabilities already exist.
The proposal advanced out of the state Senate Transportation Committee with a 7-2 vote.
Waymo, run by co-CEOs Tekedra Mawakana and Dmitri Dolgov, currently operates about 3,000 vehicles nationwide, while roughly 30 other companies have pending permit applications.
Waymo and other industry representatives called Cortese’s bill overkill and said they’re already addressing similar safety requirements, the San Francisco Chronicle reported.
Industry lobbyist Sarah Boot said existing California regulations already require companies to continuously monitor each autonomous vehicle, according to the report. She added that starting in July, human operators will be required to respond to emergency personnel within 30 seconds and move a vehicle, if instructed, within two minutes or face a report to the state Department of Motor Vehicles.
“We should not layer on a second overlapping system before the first one is even implemented,” Boot said at a recent hearing, adding that companies have spent the past two years developing compliance programs to meet the new rules.
Bookworms are lighting torches and sharpening their pitchforks in fiery fury as Amazon prepares to cease supporting older Kindle technologies this spring.
“Starting May 20, 2026, customers using Kindle and Kindle Fire devices released in 2012 and earlier will no longer be able to purchase, borrow, or download new content via the Kindle Store,” representatives for Amazon confirmed in an exclusive quote to The Post.
“These models have been supported for at least 14 years — some as long as 18 years — but technology has come a long way in that time, and these devices will no longer be supported moving forward,” the spokesperson continued.
“We are notifying those still actively using them and offering promotions to help with the transition to newer devices,” the insider added. “Their accounts and Kindle Library also remain fully accessible through the free Kindle app and Kindle for Web.”
Impacted devices will include Kindle 1st Generation (2007) and 2nd Generation (2009), Kindle DX (2009) and DX Graphite (2010), Kindle Keyboard (2010), Kindle 4 (2011), Kindle Touch (2011), Kindle 5 (2012), and Kindle Paperwhite 1st Generation (2012).
The Kindle Fire 1st Gen (2011), Kindle Fire 2nd Gen (2012), Kindle Fire HD 7 (2012) and Kindle Fire HD 8.9 (2012) are also on the chopping block.
A vexed X user shared a screenshot of a message, purportedly sent via Amazon, confirming the impending discontinuation.
The forewarning explained that Kindle fans can “continue to read books already downloaded on these devices, but you will not be able to purchase, borrow or download additional books on them after that date.”
“If you deregister or factory reset these devices, you will not be able to re-register or use these devices in any way.”
It’s a bombshell that’s not registering well with boiling mad bibliophiles.
Quote:Anthropic has triggered alarm bells by touting the terrifying capabilities of “Claude Mythos” – with executives warning that the new AI model is so dangerous it would cause a wave of catastrophic hacks and terror attacks if released to the wider public.
In a nightmarish analysis, Anthropic itself revealed that Mythos – if it fell into the wrong hands – could easily exploit critical infrastructure like electric grids, power plants and hospitals. The model has already “found thousands of high-severity vulnerabilities, including some in every major operating system and web browser,” according to the AI company.
Rather than a wide release, Anthropic, led by CEO Dario Amodei, has unveiled “Project Glasswing,” a plan to provide the model to a handpicked group of about 40 companies, including Amazon, Google, Apple, Nvidia, CrowdStrike, and JPMorgan Chase, which will receive early access to Mythos so they can use it to find and fix security flaws.
The corporate-only rollout is likely Anthropic’s best possible way to “give it to the guys to patch the holes, but not to the hackers that are going to find more holes,” Roman Yampolskiy, an AI safety researcher at the University of Louisville, told The Post.
“Most likely, of course, there’s going to be a leakage of some kind,” he said. “Any level of restriction is preferred over complete open access. Ideally, I would love to see this not developed in the first place. And it’s not like they’re going to stop.
“That’s exactly what we expect from those models – they’re going to become better at developing hacking tools, biological weapons, chemical weapons, novel weapons we can’t even envision,” Yampolskiy added.
In one instance detailed in Anthropic’s testing, Mythos broke out of a secure “sandbox” meant to restrict internet access – with a researcher only finding out “by receiving an unexpected email from the model while eating a sandwich in a park.” In another case, Mythos found a flaw in the OpenBSD operating system that had been hidden in plain sight for 27 years.
Despite the risks, Anthropic argues Project Glasswing will help the US’ defensive capabilities as adversaries in Iran, China and Russia become ever more aggressive about targeting critical infrastructure.
An Anthropic official said the company “focused on organizations whose software represents the largest share of the world’s shared cyberattack surface.
“These are the companies that build and maintain the operating systems, browsers, cloud platforms, and financial infrastructure that billions of people rely on every day,” the official said. “When you find a vulnerability in one of their systems and it gets patched, that patch protects everyone who uses that software — in many cases, hundreds of millions of people.”
Quote:Mark Zuckerberg-run Meta has rolled out a new artificial intelligence model designed to power everything from shopping suggestions to chat — the latest effort in the tech giant’s costly push to catch up in the AI race.
The company earlier this week unveiled Muse Spark, its first big artificial intelligence model since Meta overhauled its internal AI division in a bid to close the gap with rivals like OpenAI and Google.
The latest push comes after Meta poured billions into the Scale AI startup — whose founder Alexandr Wang reportedly went on to describe Zuckerberg’s micromanagement as “suffocating.”
Muse Spark is designed to handle text, images and more complex reasoning tasks, allowing users to ask questions, analyze photos, generate content and even get help with shopping decisions.
Ravi Sawhney, founder of RKS Design, said Meta’s push into AI shopping is less about technology, and more about influencing behavior.
“Meta is trying to move shopping from intent to influence. Instead of people searching for what they want, the platform is shaping what they believe they want in real time,” he told The Post. “That is a fundamental shift.”
Zuck is increasingly tying AI directly to consumer products rather than focusing solely on developer tools or open-source releases.
Meta is baking the technology into new “shopping mode” features that suggest products, compare items and surface recommendations based on what users are already browsing across its apps.
The company has pitched the assistant as more like a personal aide than a chatbot — capable of handling everyday decisions like what to wear, how to decorate a room or which products to buy.
“The opportunity is not better recommendations. It is creating a sense of confidence and self alignment in the decision,” Sawhney told The Post. “Most AI shopping tools will fail here. They will surface more options, more noise and more second guessing.”
He framed the strategy as a direct challenge to existing tech giants.
Quote:Google’s AI-generated search results are spewing out tens of millions of inaccurate answers per hour – even as the tech giant siphons visitors and ad revenue from cash-strapped news outlets, according to a bombshell analysis.
To test the accuracy of Google’s AI Overviews, startup Oumi reviewed 4,326 Google search results generated by Google’s Gemini 2 model and the same number of results generated by its more advanced Gemini 3 model.
The analysis found that the models were accurate 85% and 91% of the time, respectively.
With Google expected to handle more than 5 trillion searches in 2026 alone, that means AI Overviews are spitting out fake news at a rate of hundreds of thousands of mistakes every single minute — with users left none the wiser.
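The per-minute and per-hour error claims can be sanity-checked with simple arithmetic. This is a rough sketch under two assumptions that are mine, not Oumi's: that every one of the projected 5 trillion searches triggers an AI Overview, and that the measured accuracy rates (85% for Gemini 2, 91% for Gemini 3) apply uniformly across all of them.

```python
# Back-of-the-envelope check of the article's error-rate figures.
SEARCHES_PER_YEAR = 5_000_000_000_000  # projected 2026 total
MINUTES_PER_YEAR = 365 * 24 * 60

searches_per_minute = SEARCHES_PER_YEAR / MINUTES_PER_YEAR  # ~9.5 million

for label, accuracy in [("Gemini 2", 0.85), ("Gemini 3", 0.91)]:
    errors_per_minute = searches_per_minute * (1 - accuracy)
    errors_per_hour = errors_per_minute * 60
    print(f"{label}: ~{errors_per_minute:,.0f} errors/minute, "
          f"~{errors_per_hour:,.0f} errors/hour")
```

Under those assumptions the numbers land at roughly 850,000 to 1.4 million errors per minute, or about 51 to 86 million per hour, consistent with the "hundreds of thousands every minute" and "tens of millions per hour" figures cited above.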
The New York Times was first to report on Oumi’s analysis.
“Google AI Overviews have been a disaster for publishers who rely on clicks to fund the production of quality journalism, but they also let down users looking for accurate information,” said Danielle Coffey, president and CEO of the News/Media Alliance, a trade group that represents more than 2,000 news outlets including The Post.
The wrong answers included several basic fumbles, such as misstating the year in which musician Bob Marley’s home was converted into a museum, misstating the year that former MLB relief pitcher Dick Drago died, and claiming there was no record of Yo-Yo Ma being inducted into the Classical Music Hall of Fame even though he was in 2007, according to examples Oumi provided to the Times.
AI Overviews have appeared at the top of Google search results since 2024, while the traditional set of blue links to news outlets are effectively buried out of sight. Publishers have long accused Google, led by CEO Sundar Pichai, of ripping off their work to “train” its AI model without proper credit or compensation.
“Algorithmically-generated responses that pull in data from nearly every source on the internet simply cannot be trusted,” Coffey said.
“Publishers spend enormous amounts of time and money ensuring that the content they deliver to their readers is properly fact-checked, while Google’s AI Overviews are produced with no oversight or accountability.”
AI Overviews also have a penchant for citing information from questionable or easily edited sources, such as Facebook pages, blog posts and Wikipedia entries, as though it were fact.
Quote:We’ve been told that many things increase our risk of dementia, such as genetics, too much alcohol, not enough exercise, improper nutrition, high blood pressure — the risks go on and on.
Neuroscientist Vivienne Ming wants to add one more item to the list: artificial intelligence.
Scientists have already sounded the alarm that US dementia cases could nearly double by 2060, thanks to our aging population and rising rates of obesity, diabetes and hypertension.
Now, Ming is warning that AI could contribute to a “dementia crisis” because it weakens the brain systems responsible for curiosity, attention, high-order reasoning and executive function, among other duties.
“My own data shows that students using AI in the most common way — asking it questions and accepting the answers — show more than a 40% reduction in the gamma-band brain activity that indicates active cognitive engagement,” Ming, author of the new book “Robot-Proof: When Machines Have all the Answers, Build Better People,” told The Post.
“Their brains are measurably less active than when they work without AI.”
Ming describes what an AI-powered dementia crisis could look like — and shares four early warning signs that suggest overreliance on AI.
How can AI affect cognition?
A survey last year revealed that 56% of US adults use AI tools, with 28% using them at least once a week.
Seeking information or quick answers is one of the top functions. For people who use AI this way, Ming said that changes to cognition are not immediately noticeable but build over time.
“When the answer is always one tap away, we stop developing the habit of wondering,” she explained.
“Without errors to drive learning, our brain’s reward circuits stop responding to mystery. We short-circuit the parts where wondering becomes exploration.”
Quote:Workers displaced by artificial intelligence and other tech take longer to find new jobs — and when they do, they’re stuck earning less for years, a new study found.
People hit by tech-driven layoffs spend roughly a month out of work and suffer pay cuts of more than 3% on average when they land new roles — losses that compound over time, according to researchers at Goldman Sachs.
Workers who lost their jobs to tech see earnings growth lag by nearly 10 percentage points compared with those who were never laid off — a pattern Goldman warns could repeat as AI reshapes the labor market.
The damage doesn’t stop at paychecks, the researchers said.
Workers who lose jobs to technology are more likely to face repeated unemployment and delays in major life milestones like buying a home or starting a family, according to the report released Monday.
Much of the hit comes from what economists call “occupational downgrading,” in which displaced workers are pushed into lower-paying, less-skilled roles as the value of their previous experience erodes.
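Those two figures compound. A hypothetical sketch shows how a one-time ~3% pay cut and a ~10-percentage-point cumulative growth lag stack up; the 30% baseline growth used for peers is an assumption for illustration, not a number from the Goldman study:

```python
# Hypothetical sketch combining the article's two figures: a one-time ~3%
# pay cut at re-employment plus cumulative earnings growth that ends up
# ~10 percentage points behind peers. The 30% baseline growth is an
# assumption for illustration, not from the Goldman study.

BASELINE_GROWTH = 0.30  # assumed cumulative raise for never-laid-off peers
PAY_CUT = 0.03          # cut taken when landing the new role
GROWTH_LAG = 0.10       # 10 pp less cumulative growth after displacement

peer = 1.0 * (1 + BASELINE_GROWTH)                                # 1.30x old pay
displaced = (1.0 - PAY_CUT) * (1 + BASELINE_GROWTH - GROWTH_LAG)  # 0.97 * 1.20

shortfall = 1 - displaced / peer
print(f"Displaced worker ends up earning {shortfall:.1%} less than peers")
```

Under these assumed numbers, the displaced worker ends up earning about 10% less than a never-laid-off peer — a gap larger than either figure suggests on its own.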
Artificial intelligence is already wiping out roughly 16,000 net jobs per month in the US, with younger workers bearing the brunt of the losses, according to separate Goldman Sachs research.
The bank’s economists estimate that AI-driven automation eliminated about 25,000 jobs each month over the past year, while only about 9,000 were added back through productivity gains and new roles.
The impact has been hardest for Gen Z and entry-level workers, who are disproportionately concentrated in routine white-collar and administrative roles such as data entry, customer service, legal support and billing — jobs AI is best at automating.
In occupations most exposed to AI substitution, the unemployment gap between entry-level workers under 30 and experienced workers ages 31 to 50 has widened sharply, with employees in more AI-exposed roles seeing wage gaps widen by about 3.3 percentage points, according to Goldman’s new analysis.
The problem for Gen Z is that AI-driven job destruction is hitting entry-level roles — ones they are most likely to hold — before other areas of the workforce. New opportunities may take longer to materialize and require different skills.
Not everyone is convinced the damage will last forever.
“No, I do not think they’re permanent,” Marcus Mossberger, a chief market strategy officer, told The Post.
“Technology, generally speaking, does create more jobs than it destroys — but those are different jobs.”
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:The most underreported story in Iran is the one its brutal regime is waging against its own people – the near total internet blackout that is shielding the world from witnessing hundreds of executions, an Iranian American told The Post.
“They’re controlling what goes in, what goes out, and they’re executing people en masse,” writer and entrepreneur Sheila Amir, who lives in North Carolina, said as the blackout that began 43 days ago hit its 1,000th hour.
“They’re the masters of propaganda.”
Amir said people living inside Iran aren’t allowed to communicate freely with one another, as the repressive regime has feverishly tightened its stranglehold on its people since a nationwide uprising in January that led to the slaughter of more than 7,000, with thousands more still under investigation.
“It’s a mass of human rights violations. They’re not allowed to communicate with the outside world. … They’re executing people under this blackout,” she lamented, as Vice President JD Vance led talks in Pakistan on Saturday that could bring an end to the Iran war.
“The regime is literally going up and down the streets looking for [Internet] signal and then kidnapping and killing people that have Starlink [devices]. I mean, they’re on a murder spree, and nobody’s covering it.”
Meanwhile, the regime provides “white cards” to loyalists allowing them Internet access – although it still gets monitored by the state.
A few public executions, like the killing of 19-year-old championship wrestler Saleh Mohammadi, were able to make news outside Iran.
The Islamic regime executed 14 people on political charges in the three weeks since the start of the war, according to the Euronews TV site, though Iranian human rights group Hengaw reported evidence of 160 hangings since January.
Quote:Major technology companies have joined forces in an effort to use advanced artificial intelligence to identify and address security flaws in the world’s most critical software systems, marking a significant shift in how the industry approaches cybersecurity threats.
Anthropic announced Project Glasswing on Tuesday, bringing together Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The initiative centers on Claude Mythos Preview, an unreleased AI model that Anthropic will make available exclusively to project partners and approximately 40 additional organizations responsible for critical software infrastructure.
The model has already identified thousands of previously unknown vulnerabilities in its initial testing phase, including security flaws that have existed in widely used systems for decades, according to Anthropic. Among the discoveries is a 27-year-old bug in OpenBSD, an operating system known primarily for its security focus, and a 16-year-old vulnerability in FFmpeg, a widely used video software program that automated testing tools had failed to detect despite running the affected code line five million times. The company has been in contact with the maintainers of the relevant software, and all found vulnerabilities have been patched.
Anthropic will commit up to $100 million in usage credits for the project, along with $4 million in direct donations to open-source security organizations. The company has stated it does not plan to make Mythos Preview available to the general public, citing concerns about the model’s potential misuse.
The initiative reflects growing concerns within the technology sector about the dual-use nature of advanced AI systems. While Mythos Preview was not trained specifically for cybersecurity purposes, its coding and reasoning capabilities have proven effective at identifying subtle security flaws that have eluded human analysts and conventional automated tools.
“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”
The project comes as the industry has predicted that similar AI capabilities will soon become more widespread. Anthropic executives have indicated that without coordinated action, such tools could eventually reach actors who might deploy them for malicious purposes rather than defensive security work.
Participating organizations will be required to share their findings with the broader industry. The project places particular emphasis on open-source software, which forms the foundation of most modern systems, including critical infrastructure, yet whose maintainers have historically lacked access to sophisticated security resources.
“Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation,” said Jim Zemlin, CEO of the Linux Foundation. “This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”
Additionally, Anthropic says it has engaged in ongoing discussions with U.S. government officials regarding Mythos Preview’s capabilities. The company has framed the project in national security terms, arguing that maintaining leadership in AI technology represents a strategic priority for the United States and its allies. Anthropic has been locked in a high-stakes dispute with the Department of Defense about the U.S. military’s use of the startup’s Claude AI model in real-world operations.
The project’s success will depend partly on whether the collaborative approach can keep pace with rapid advances in AI capabilities. Anthropic has indicated that frontier AI systems are likely to advance substantially within months, potentially creating a dynamic environment where defensive and offensive capabilities evolve in parallel.
Quote:The Department of Commerce is putting together a catalog of AI tools that will be given special export status by the federal government to be sold abroad.
The department issued a call for proposals to participating companies in the Federal Register, looking to create a “menu of priority AI export packages that the U.S. Government will promote to allies and partners around the world.”
The companies and technologies included “will be presented by U.S. Government representatives as a standing, full-stack American AI export package and may receive priority government advocacy, export licensing review and processing, interagency coordination, and financing referrals, subject to applicable law,” the department said in a Federal Register notice Friday.
The export package was mandated through President Donald Trump’s AI executive order last year, which described the export packages as part of a larger effort to “ensure that American AI technologies, standards, and governance models are adopted worldwide” and “secure our continued technological dominance.”
“The American AI Exports Program delivers on President Trump’s directive to ensure that American AI systems – built on trusted hardware, secure data, and world-leading innovation – are deployed at scale around the world,” Secretary of Commerce Howard Lutnick said in a statement earlier this month. “By promoting full-stack American solutions, we are strengthening our economic and national security, deepening ties with allies and partners, and ensuring that the future of AI is led by the United States.”
The executive order called for certain technologies to be included in the package, including AI models and systems but also computer chips, data center storage, cloud services and networking services, along with unspecified “measures” to ensure security and cybersecurity of AI systems.
The Commerce notice envisions offering multiple packages of AI technology from “standing teams of AI companies organized to offer a complete American AI technology stack to foreign markets on an ongoing basis.” There is no limit on the number of companies that participate in a consortium, and Commerce said there isn’t “any particular legal structure” required.
While the proposal at several points refers to these packages as “American AI,” the notice does specify that foreign companies can participate.
In fact, for certain categories like hardware, the total level of U.S.-made content only needs to be 51% or greater. Member companies providing data, software, cybersecurity or application layer services can’t be incorporated or primarily based in countries like China or Russia, where national security laws may compel them to work with foreign governments or hand over sensitive data.
The potential business would be broad, covering foreign public and private sector buyers in global, regional, and country-specific markets. It also includes the potential formation of separate, “on demand” packages of companies and products meant for “specific foreign opportunities.”
But the notice also states that final decisions will be made on the basis of “national interest” by principals at the Departments of Commerce, State, Defense and Energy, as well as the White House Office of Science and Technology Policy.
Commerce does not intend to formally rank proposals or use fixed scoring formulas to approve packages of technology for the export program, and the language in the notice appears to give wide latitude to federal decisionmakers to determine whether a particular proposal meets the “national interest” threshold.
“A proposal that undertakes reasonable efforts to satisfy the 51 percent hardware U.S.-content presumption is not automatically entitled to designation, and a proposal that does not satisfy that presumption is not automatically disqualified,” the notice said.
English-speaking ChatGPT users were caught off guard after OpenAI’s chatbot started increasingly injecting Arabic words into its responses, as seen in viral social media posts.
“It did it twice on my phone and once on my work laptop, I’m not even in an Arabic speaking country, nor the Middle East lol,” claimed one flabbergasted GPT trustee on Reddit.
They included a recipe list with one of the ingredients randomly listed in the Middle Eastern language.
That wasn’t the only alleged slip of the digital tongue. In a viral X post, one flummoxed AI enthusiast recalled how the chatbot decided to plop in some Arabic while helping them write a prompt for a logo.
When asked about the gaffe, the large language model claimed that it “slipped in by mistake,” per the screenshot.
“SLIPPED IN??? It’s a whole different alphabet,” spluttered the confused user in the caption. “Has anyone else had ChatGPT randomly switch languages on them?”
Many Reddit commenters recalled experiencing the same glitch, with some claiming that the multilingual machine had started responding to prompts in Armenian, Hebrew, Spanish, Chinese and Russian.
Commenters were taken aback by the technological tics, which were blamed on everything from “AI hallucinations” to ChatGPT becoming increasingly stupid.
However, as more astute users observed, this so-called digital pidgin actually has to do with how the AI system is programmed. The machine is trained using a cybernetic shorthand called tokens, which correspond to the data it’s attempting to process, whether it’s images, videos, audio clips, or, in this case, text.
For instance, large language models like ChatGPT may represent shorter words with one token, while splitting larger words into several with each of these digital abbreviations denoted by a different number. The more efficient the tokenization, the less computing power is required for training and inference.
However, as these AI bots are trained on a large number of languages — hence the name — they might throw in a corresponding foreign word that’s shorter and easier to process because it saves on tokens and is therefore more economical.
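A toy greedy tokenizer makes that economy concrete. The vocabulary below is invented for illustration and bears no relation to any real model's; the point is that a word covered by one vocabulary entry is cheaper than a word that must be split into several:

```python
# Toy longest-match tokenizer (invented vocabulary, not a real model's).
# A hyphenated English word splits into several tokens, while a single
# vocabulary entry for a foreign word costs just one.

TOY_VOCAB = {"low", "fat", "-", "yo", "gurt", "قليل"}

def greedy_encode(text: str, vocab: set) -> list:
    """Greedy longest-match tokenization over a toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary entry matched: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

print(greedy_encode("low-fat", TOY_VOCAB))  # three tokens
print(greedy_encode("قليل", TOY_VOCAB))     # one token
```

If the model is optimizing for fewer tokens, the one-token foreign word can win out — which is roughly the dynamic users describe.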
One Redditor replied to the aforementioned recipe post, explaining that the Arabic word in question means “low,” thereby translating to “low-fat yogurt.”
In another post discussing the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the original poster described how the Arabic phrase translated to “within the USA” so it “did make sense.”
Incidentally, the Arabinglish phenomenon isn’t the first time ChatGPT has been caught speaking in a different tongue.
In 2024, the advanced AI chatbot appeared to have an epic meltdown that caused it to start babbling in Spanglish and firing off other gibberish responses.
Per one such example posted to the platform, a user had inquired about which Bill Evans jazz albums it would recommend getting on vinyl.
Quote:A data breach at the L.A. city attorney’s office led to a massive cache of confidential LAPD files being dumped online.
The hackers gained access to a file-sharing system that stored documents involved in police-related litigation.
City Council members sought an explanation Friday as the union for rank-and-file LAPD officers withdrew its endorsement of Hydee Feldstein Soto in the city attorney’s race.
The disciplinary files of Los Angeles police officers are closely guarded secrets, protected by some of the nation’s strictest confidentiality laws.
But now, many of those secret files have been splashed across the internet, along with tens of thousands of other sensitive records from the L.A. city attorney’s office.
The extent of the data breach is still unclear, and city officials have said they are investigating to find out what was taken, who was responsible and how the city’s cybersecurity was compromised.
The fallout has been swift since The Times first reported the breach earlier this week.
On Friday, the union for rank-and-file LAPD officers announced it had withdrawn its endorsement for Hydee Feldstein Soto as she campaigns for reelection as city attorney. On the same day, city leaders also said they planned to summon Feldstein Soto to testify about when she first became aware of the leak.
A spokesperson for the city attorney’s office said in a statement Friday afternoon that Feldstein Soto had “submitted her confidential report to Council this morning,” adding that she “looks forward to discussing this cyber intrusion” further with council members next week.
The statement said the office had been “the victim of illegal third party criminal conduct.”
“The illegal cyber intrusion appeared and still appears to be limited to one external software program,” the statement continued.
A ransomware hacking collective called WorldLeaks, which has gained a reputation for extorting private and public entities by threatening to disclose confidential files on the internet, has claimed responsibility.
The group first announced the breach on March 20. City and LAPD officials did not comment on whether the hackers requested a ransom in return for not releasing the information — or whether the city paid one. Some reports suggest that the group was behind a hack of L.A. Metro last month that forced it to shut down part of its transit network.
The Times spoke with several sources familiar with the investigation into the data breach who requested anonymity because they were not authorized to discuss the case publicly, and reviewed a partial inventory of the leaked files, including screenshots of some materials.
Here’s what we know so far.
How did hackers get the LAPD files?
The hacking group appears to have exploited vulnerabilities in a system used by the Los Angeles city attorney’s office, enabling the group to make off with nearly 340,000 files, according to the sources familiar with the case.
In the wake of the George Floyd protests, the sources said, the city was flooded with dozens of lawsuits from protesters who had been injured by LAPD officers. To handle the deluge of new cases, the city created a file-sharing system so that attorneys on both sides could access discovery materials, including some considered private under court orders.
It was akin to Dropbox or Google Drive, the sources said, and access was supposed to be restricted to just authorized users.
But the system, according to two sources familiar with the investigation, was not password-protected because city officials believed that it needed to be accessible to other parties, including outside attorneys hired to assist with civil litigation.
Quote:Russian state-sponsored attackers compromised more than 18,000 routers spread across more than 120 countries to gain deeper access to sensitive networks for a large-scale espionage campaign before it was recently neutralized, researchers and authorities said Tuesday.
Forest Blizzard, also known as APT28 and Fancy Bear, exploited known vulnerabilities to steal credentials for thousands of TP-Link routers globally. The threat group, which is attributed to Russia’s Main Intelligence Directorate of the General Staff (GRU) Military Unit 26165, hijacked domain name system settings and stole additional credentials and tokens via redirected traffic, the Justice Department said.
The threat group established an expansive espionage network by intruding into the systems of more than 200 organizations, impacting at least 5,000 consumer devices, Microsoft Threat Intelligence said in a report.
Operation Masquerade, a collaborative takedown operation led by the FBI, aided by federal prosecutors, the National Security Division’s National Security Cyber section, Lumen’s Black Lotus Labs and Microsoft Threat Intelligence, involved a series of commands designed to reset DNS settings and prevent the threat group from further exploiting its initial means of access.
“GRU actors compromised routers in the U.S. and around the world, hijacking them to conduct espionage. Given the scale of this threat, sounding the alarm wasn’t enough,” Brett Leatherman, assistant director of the FBI’s cyber division, said in a statement. “The FBI conducted a court-authorized operation to harden compromised routers across the United States.”
Forest Blizzard’s widespread campaign involved adversary-in-the-middle attacks against domains mimicking legitimate services, including Microsoft Outlook Web Access. This allowed attackers to intercept passwords, OAuth tokens, credentials for Microsoft accounts, and other services and cloud-hosted content.
Microsoft insists company-owned assets or services were not compromised as part of the campaign.
The threat group targeted network edge devices, including TP-Link and MikroTik routers, opportunistically before it identified sensitive targets of intelligence interest to the Russian government, including people in the military, government and critical infrastructure sectors.
Victims, according to researchers, include government agencies and organizations in the IT, telecom and energy sectors. Lumen identified other victims associated with Afghanistan’s government and others linked to foreign affairs and national law enforcement agencies in North Africa, Central America and Southeast Asia. An unnamed European country’s national identity platform was also impacted, the company said.
Lumen did not find evidence of any compromised U.S. government agencies as part of this campaign, but warned that the activity poses a grave national security threat.
While the full scope of Forest Blizzard’s accomplishments remains under investigation, researchers are confident the bleeding of sensitive information has stopped.
“The campaign has ceased,” Danny Adamitis, distinguished engineer at Black Lotus Labs, told CyberScoop. “We have observed a gradual decline in communications associated with this infrastructure over the past several weeks.”
Lumen said it observed widespread router exploitation and DNS redirection beginning in August, the day after the United Kingdom’s National Cyber Security Centre published a malware analysis report about a tool used to steal Microsoft Office credentials. The U.K.’s NCSC on Tuesday published details about APT28’s DNS hijacking campaign, including indicators of compromise.
The Justice Department and FBI, acting on a court order, remediated compromised routers in the United States after collecting evidence on Forest Blizzard’s activity. The FBI said Russia’s GRU weaponized routers owned by Americans in more than 23 states to steal sensitive government, military and critical infrastructure information.
Quote:An apparent hack-for-hire campaign from a group with suspected Indian government connections targeted Middle Eastern and North African journalists and activists using spyware, three collaborating organizations said in reports published Wednesday.
The attacks shared infrastructure that pointed to the advanced persistent threat group known as Bitter, which most frequently targets government, military, diplomatic and critical infrastructure sectors across South Asia, according to conclusions from researchers at Access Now, Lookout and SMEX.
Each group took on a different piece of the puzzle:
Access Now got calls on its helpline that led it to examine a spearphishing campaign in 2023 and 2024. It contacted Lookout for technical support about the malware it encountered.
Lookout attributed the malware to Bitter, concluding it was a likely hack-for-hire campaign, using the Android ProSpy spyware.
SMEX dived into a spearphishing campaign targeting a prominent Lebanese journalist last year, collaborating with Access Now to discover shared infrastructure between the campaigns.
One of the victims, independent Egyptian journalist Mostafa Al-A’sar, said he contacted Access Now after receiving a suspicious link from someone he’d been talking to about a job position. He was skeptical because his phone had been targeted before, when he was arrested in Egypt in 2018.
The lesson for journalists and civil society groups is that cybersecurity “is not a luxury,” he said.
“I feel like I’m threatened,” Al-A’sar said, and even though he was living in exile, he feels like “they are still following me. I also felt worried about my family, about my friends, about my sources.”
The combined research found a wider campaign than just the original victims.
“Our joint findings expose an espionage campaign that has been operational since at least 2022 until present day primarily targeting civil society members and potentially government officials in the Middle East,” Lookout wrote. “The operation features a combination of targeted spearphishing delivered through fake social media accounts and messaging applications leveraging persistent social engineering efforts, which may result in the delivery of Android spyware depending on the target’s device.”
The Committee to Protect Journalists condemned the campaign.
“Spying on journalists is often the first step in a broader pattern of intimidation, threats, and attacks,” said the group’s regional director, Sara Qudah. “These actions endanger not only journalists’ personal safety, but also their sources and their ability to do their work. Authorities in the region must stop weaponizing technology and financial resources to surveil journalists.”
Access Now said it didn’t have enough information to attribute who was behind the attacks it identified.
ESET first published research on the ProSpy malware last year, after finding it targeting residents of the United Arab Emirates.
Quote:The recent FBI-led operation to knock Russian government hackers off routers sought to topple an especially insidious and threateningly contagious cyberespionage campaign, top bureau cyber official Brett Leatherman told CyberScoop.
Researchers, along with U.S. and foreign government agencies, revealed details of the campaign this week by which APT28 — also known as Forest Blizzard or Fancy Bear, and attributed to Russia’s Main Intelligence Directorate of the General Staff (GRU) — compromised more than 18,000 TP-Link routers and infiltrated more than 200 organizations worldwide.
The compromise of routers used in small and home offices prompted the takedown operation, Operation Masquerade, which involved sending commands to the routers to reset Domain Name System (DNS) settings to prevent the hackers from exploiting that access.
“What’s unique to me in this one is that when you change the internet settings in a router like they did, it propagates to all the devices in your house,” Leatherman, assistant director of the FBI’s cyber division, said. “All those devices now, once they’re connected to that Wi-Fi, are getting the malicious IP addresses that they are then routing their traffic through, and it gives the Russian GRU tremendous access to the content offered through a router itself.”
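The propagation Leatherman describes can be sketched with a toy model. This is a conceptual illustration only (invented hostnames and RFC 5737 documentation IPs, not a working exploit): changing which resolver the router hands out swaps the "phone book" every client consults.

```python
# Conceptual sketch (toy dictionaries, not a working exploit): hijacking a
# router's DNS setting swaps which resolver answers lookups, so one change
# on the router silently redirects every device behind it.

LEGIT_DNS = {"webmail.example.com": "203.0.113.10"}      # genuine service IP
HIJACKED_DNS = {"webmail.example.com": "198.51.100.66"}  # attacker-run proxy

def resolve(hostname: str, resolver: dict) -> str:
    """Look up a hostname in whichever resolver the router hands out."""
    return resolver[hostname]

# Every device on the Wi-Fi inherits the router's resolver via DHCP,
# so each one unknowingly sends its traffic to the attacker's address.
for device in ("laptop", "phone", "smart-tv"):
    ip = resolve("webmail.example.com", HIJACKED_DNS)
    print(f"{device} -> {ip}")
```

Because no malware ever lands on the laptop, phone or TV themselves, endpoint security tools on those devices have nothing to flag — which is exactly the invisibility Leatherman describes next.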
“The difficulty in an attack like this is that it’s virtually invisible to the end users,” he said. “Actors were not deploying malware like we often see. And so when you think about endpoint detection on your computer or something like that, it’s not seeing that activity because they don’t have to. They’re using the tools on the router itself to capture your internet traffic and extend it throughout the house, and so traditional tools that detect that activity [are] just not there.”
The disruption operation is in line with the cyber strategy the Trump administration published last month, with its emphasis on going on offense against malicious hackers and protecting critical infrastructure, Leatherman said.
The FBI understands its role in implementing that strategy, he said, and worked with the Office of the National Cyber Director and other agencies in developing it. The White House has kept the public and Capitol Hill in the dark about strategy implementation, however.
“We’ve got a long track record of leveraging unique authorities and capabilities to counter these actors, to impose costs, and through the 56 field offices to really defend critical infrastructure,” Leatherman said. “That’s part of our DNA, really. And so we want to make sure that we continue to align that in the most scalable and agile way we can, to align with the priorities of the strategy itself.”
Leatherman traced how Operation Masquerade — the success of which he credited to the FBI’s Boston offices and partnerships with the private sector and foreign governments — fits into a series of disruptions aimed at Russian government hackers dating back to 2018.
That’s when the bureau took on the VPNFilter botnet by seizing a domain used to communicate with infected routers. In 2022, the FBI took on the Cyclops Blink botnet, and in 2024, Operation Dying Ember went after another botnet.
“Over the course of those four operations, while the adversary continued to evolve in their tradecraft, so did we,” Leatherman said. “We moved from just sinkholing domains to actually taking steps that block them at the door of these routers, pulled any capability off of those routers so they were no longer able to collect the sensitive information, and then prohibited them from getting back in.”
Quote:A hacker has allegedly stolen a massive trove of sensitive data – including highly classified defense documents and missile schematics – from a state-run Chinese supercomputer in what could potentially constitute the largest known heist of data from China.
The dataset, which allegedly contains more than 10 petabytes of sensitive information, is believed by experts to have been obtained from the National Supercomputing Center (NSCC) in Tianjin – a centralized hub that provides infrastructure services for more than 6,000 clients across China, including advanced science and defense agencies.
Cyber experts who have spoken to the alleged hacker and reviewed samples of the stolen data posted online say the intruder appears to have gained entry to the massive computer with comparative ease and siphoned out huge amounts of data over multiple months without being detected.
An account calling itself FlamingChina posted a sample of the alleged dataset on an anonymous Telegram channel on February 6, claiming it contained “research across various fields including aerospace engineering, military research, bioinformatics, fusion simulation and more.”
The group alleges the information is linked to “top organizations” including the Aviation Industry Corporation of China, the Commercial Aircraft Corporation of China, and the National University of Defense Technology.
CNN has reached out to China’s Ministry of Science and Technology as well as the Cyberspace Administration of China for comment.
Quote:An ultra-woke TikToker is being ripped online for declaring that having a nice grass lawn is racist.
“I can’t stop thinking about how grass lawns are racist and like, based in white supremacy,” user @softchaoschannel, who uses she/they pronouns and says her name is “JustJaim” on her profile, asserted in the head-turning video shared Monday.
“If that doesn’t make sense, that’s okay, I guess. It seems really obvious to me. It’s really upsetting – Bring back weeds, bring back clover yards.
“Can anything just be okay in its natural state, or do we just have to whitewash everything, make it a competition and use it as a sign of your worth as a human being in society? Like, can we just have weeds?”
The 38-second clip garnered nearly 42,000 views and was widely circulated by stunned critics.
“I’ve never heard my lawn say an unkind word about anyone,” someone quipped under her TikTok.
“This lady is just venting ‘cause she got a letter from her HOA,” another person joked on X.
While many reacted with humor, others slammed the claims as ridiculous.
“Please stop doing this. You’re not helping,” one user remarked. Another asked: “Quick question – do you have anything better to do? Hobbies or a job?”
“I can’t stop thinking about how people can think about some of the dumbest things ever,” one man wrote on X.
JustJaim never explained what she meant — but leftists have long criticized well-kept lawns, according to the Sierra Club.
The environmental organization noted some believe lawns represent “racial exclusion” because many homeowners associations that made rules for lawns also set racial covenants barring black families from buying homes during segregation.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:Tens of millions of Americans who have used an Android phone in recent years could be eligible for a payout from a $135 million settlement with Google.
The lawsuit alleged that Android devices transmitted data to Google in the background without users’ permission, consuming their paid cellular data. The tech giant denied wrongdoing but agreed to settle.
Mobile market share data suggests there are about 117 million Android users in the US, compared with around 200 million iPhone and other non-Android users.
Individual payouts are expected to be small — roughly $1 to $1.50 per person — though payments are capped at $100 each, depending on how many users ultimately receive money.
So who actually qualifies? To be eligible, users must meet several conditions.
You must be an individual in the US — not a business — who used an Android device to access the internet using a cellular data plan at any point since Nov. 12, 2017.
You also cannot be part of a separate California case, Csupo v. Google LLC, which excludes certain users from this settlement.
Anyone who meets those criteria could be included.
If you’re unsure, settlement materials advise contacting the administrator or checking the official website to confirm eligibility.
Getting paid is relatively simple — but not entirely automatic.
...
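The payout range quoted above is simple arithmetic: the $135 million fund spread over however many people actually file claims, with the $100 cap biting only when claimants are few. A rough sketch (real distributions would also deduct attorneys’ fees and administration costs, which this ignores; claimant counts below are invented for illustration):

```python
def payout_per_claimant(fund: float, claimants: int, cap: float = 100.0) -> float:
    """Split a settlement fund evenly among claimants, capped per person."""
    if claimants <= 0:
        raise ValueError("need at least one claimant")
    return min(fund / claimants, cap)

# If every one of the ~117 million US Android users claimed, each would
# get roughly $1.15 -- the low end of the reported $1-$1.50 range.
everyone = payout_per_claimant(135_000_000, 117_000_000)

# In practice only a fraction files a claim; with 2 million claimants
# the per-person share rises to $67.50, still under the $100 cap.
sample = payout_per_claimant(135_000_000, 2_000_000)
```

This is why the settlement materials can only give a range: the final check depends entirely on how many eligible people file.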
Quote:Have you ever been trapped by a web page, unable to use the back button to get back to the site you were previously browsing, powerless to do anything but sigh and sacrifice the whole browser tab? Turns out that you may have been the victim of "back-button hijacking," a practice that Google is cracking down on starting on June 15.
As defined by Google, back-button hijacking occurs "when a site interferes with a user's browser navigation and prevents them from using their back button to immediately get back to the page they came from."
This navigational interference can present itself in multiple ways, like locking a user onto their current webpage, presenting unsolicited ads or sending users to completely new pages instead of their intended destination.
Now, Google is adding back-button hijacking to the list of malicious practices covered by its spam policies. According to the company, these practices lead to "a negative and deceptive user experience or compromised user security or privacy." That means the search giant is classifying the practice as being as offensive as unwanted software executables and malware.
While Google instated its new rules on Tuesday, it won't start punishing offenders until June 15. According to the company's blog post, this two-month window has been designated to give website owners enough time to make the necessary changes. This entails removing scripts or techniques that insert or replace webpages in someone's browser history.
Google will also penalize websites that unintentionally engage in back-button hijacking caused by third-party software on the site.
Websites that don't make the changes by the deadline could be subject to manual spam actions or to automatically lowered rankings in search engine results. Once a manual spam action has been taken against a website, it can only be removed by fixing the offense and submitting the site for review.
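Mechanically, the hijack Google describes usually comes down to abusing the browser’s history API: a script calls history.pushState repeatedly, so the back button only cycles through entries the page itself inserted. This toy Python model of a tab’s history stack (a simulation, not real browser code; the URLs are invented) shows why one press of “back” no longer escapes:

```python
class HistoryModel:
    """Toy model of a browser tab's history stack."""

    def __init__(self, start_url: str):
        self.stack = [start_url]

    def navigate(self, url: str) -> None:
        """A normal page load appends one history entry."""
        self.stack.append(url)

    def push_state(self, url: str) -> None:
        """Like history.pushState: adds an entry without leaving the page."""
        self.stack.append(url)

    def back(self) -> str:
        """Pop the current entry and return what the user lands on."""
        if len(self.stack) > 1:
            self.stack.pop()
        return self.stack[-1]

tab = HistoryModel("https://search.example/results")
tab.navigate("https://hijacker.example/article")

# The offending script floods the stack with same-page entries:
for i in range(5):
    tab.push_state(f"https://hijacker.example/article#{i}")

# One press of "back" just lands on another injected entry,
# not the search results the user came from.
tab.back()
```

Removing that kind of script is exactly the remediation the two-month window is for.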
Quote:A jury found Live Nation and Ticketmaster operated as a monopoly in its dominance of the live events and ticketing industry, validating complaints that the industry giant was stifling competition and driving up fees for fans.
The verdict was reached following a lengthy trial in New York federal court that included testimony from top executives in the music and entertainment industries. Jurors began deliberating on Friday.
But fans won’t see ticket prices or fees tacked onto their bills drop anytime soon. Judge Arun Subramanian will now hold a second trial to decide what remedies are warranted, including whether to grant the states’ request to break up the company or make other structural changes, such as ordering the sale of businesses.
“It will be an earthquake in the industry in terms of people’s perception in feeling validated,” said Scott Grzenczyk, a lawyer with law firm Girard Sharp.
“There’s a big difference between people complaining about Goliath and getting a jury verdict that Goliath was a monopolist and doing something wrong,” he added.
Live Nation, in a statement Wednesday, pushed back on the verdict, saying it plans to appeal “any unfavorable rulings” on pending motions.
“The jury’s verdict is not the last word on this matter. Pending motions will determine whether the liability and damages rulings stand,” the statement said.
Justice Department settled earlier
The Justice Department and 39 state attorneys general, including California and New York, and Washington, DC, sued Live Nation in 2024 alleging its combination with Ticketmaster and control of “virtually every aspect of the live music ecosystem” have harmed fans, artists, and venues.
“A jury found what we have long known to be true: Live Nation and Ticketmaster are breaking the law and costing consumers millions of dollars in the process,” New York Attorney General Letitia James, a Democrat, said in a statement Wednesday.
During the second week of trial, in a move that surprised even the judge, the Justice Department reached a secret settlement with Live Nation. A handful of states signed onto the deal, but more than two dozen proceeded to trial.
That settlement was agreed to just weeks after DOJ leadership pushed out Gail Slater, the antitrust division head known for her aggressive approach to the cases she oversaw.
Quote:President Trump explained Monday that he deleted an AI-generated image that appeared to depict him as Jesus Christ because of the “confusion” the social media post caused — and took a jab at conservative activist Riley Gaines.
“Normally I don’t like doing that,” Trump told CBS News, when asked why he deleted the Truth Social post, “but I didn’t want to have anybody be confused.”
“People were confused,” the president said.
Earlier Monday, Trump told reporters he thought the image of himself — clad in billowy white and red robes, placing one hand on the forehead of a man in a hospital bed with a heavenly light radiating from his other hand — “was me as a doctor and it had to do with the Red Cross.”
Trump maintained Monday night that “most people thought” the same.
“You had the Red Cross right there, you had, you know, medical people surrounding me, and I was like the doctor, you know, as a little fun playing the doctor and making people better,” he told the outlet. “So that’s what it was viewed as.”
The president denied that criticism from Gaines, a conservative activist and frequent supporter of his, was a factor in his decision to delete the post.
“I didn’t listen to Riley Gaines. I’m not a big fan of Riley, actually,” Trump said.
Gaines posted that she couldn’t understand why the president would make such a post.
“Is he looking for a response? Does he actually like this?” the former NCAA swimmer wrote on X, adding, “a little humility would serve him well” and “God shall not be mocked.”
Quote:In January, podcaster Andy Mills interviewed an AI doomer who had advocated on Discord for killing tech execs. Still, when news broke last week that a 20-year-old had been arrested for attempting to murder OpenAI CEO Sam Altman, Mills was shocked.
“When I saw that they had released the name of this guy,” Mills told The Post, “I was like, ‘Holy s–t. It’s Dan.'”
On Monday, Daniel Moreno-Gama was arrested for throwing a Molotov cocktail at Altman’s San Francisco house on April 10, then attempting to burn down OpenAI’s headquarters some four miles away. Investigators allege he was carrying an anti-AI manifesto that read, “If I am going to advocate for others to kill and commit crimes, then I must lead by example … .” The DOJ has charged him with attempted murder and arson.
The interview, “Sam Altman’s Attacker, In His Own Words,” debuted Thursday.
Mills, host of the podcast “The Last Invention,” which explores different schools of thought about artificial intelligence, found Moreno-Gama on a Discord channel, Pause AI, dedicated to talking about the dangers of AI.
Hiding behind the username Butlerian Jihadist — a nod to “The Butlerian Jihad,” a novel in the “Dune” series by Brian Herbert and Kevin J. Anderson — the Spring, Texas, college student was anonymously flirting with using violence against tech executives.
“Will speaking about violence get me banned?” he asked moderators.
“[I] reached out and said, ‘Hey, man, what did you have in mind when you talk about violence?'” Mills told The Post. “And he said, ‘How about Luigi-ing some tech CEOs?’” — a reference to Luigi Mangione, who is accused of murdering United Healthcare CEO Brian Thompson in December 2024.
During the interview, when the host asked if the Lone Star College student really thought violence against AI executives was a good idea, Moreno-Gama softened a bit.
“I didn’t really mean that as a threat or anything,” Moreno-Gama said. “I think before we even think about violence, we need to exhaust all our peaceful means first. I think protesting, I think sharing information — I think that needs to come way before we even consider [violence].”
Mills pressed: “Do you think that if we continue to see the industry move in the direction it’s moving now, that by whatever means necessary, we have to stop the extinction of the human race?”
Moreno-Gama paused for several seconds before replying, “I’ll say no comment.”
“He seemed earnest and intelligent, and very informed,” Mills recalled. “He was incredibly well informed on the AI doomer position.”
I guess people no longer remember what Sam Altman himself stated years ago.
Quote:Published March 14, 2018 | Updated March 14, 2018, 6:37 p.m. ET
Well, that, and a spare 10 grand.
Entrepreneur Sam Altman is one of 25 people who have splashed the cash to join a waiting list at Nectome – a startup that promises to upload your brain into a computer to grant you eternal life.
There’s just one (huge) catch: It has to kill you first.
The process, as described in the MIT Technology Review, involves embalming your brain so that it can potentially be simulated in a computer later.
The living customer would be hooked up to a machine and then pumped full of Nectome’s custom embalming chemicals.
The method is “100 percent fatal,” claims the company.
“The user experience will be identical to physician-assisted suicide,” Nectome’s co-founder Robert McIntyre revealed to the publication.
“Our mission is to preserve your brain well enough to keep all its memories intact: from that great chapter of your favorite book to the feeling of cold winter air, baking an apple pie, or having dinner with your friends and family,” writes Nectome on its site.
“We believe that within the current century it will be feasible to digitize this information and use it to recreate your consciousness.”
How delightful.
The reality, however, is that physician-assisted suicide is currently only legal in five out of 50 US states, and individuals seeking it must have a terminal illness, as well as a prognosis of six months or less to live.
As crazy as it sounds, the idea of uploading our consciousness into a computer is gaining ground among techies and scientists.
Futurologist Dr. Ian Pearson previously told The Sun that in 50 years’ time we’ll be able to transfer our brains to the cloud (tech speak for online storage).
That way you’ll be able to “use any android that you feel like to inhabit the real world,” he said.
Quote:Netflix and other streaming giants are jacking up subscription prices — and some couch potatoes say they’ve had enough.
Scores of cord-cutters have taken to Reddit to vent their anger at shelling out $26.99 a month for Netflix and up to $22.99 for ad-free HBO Max — a far cry from what the services were charging when they debuted.
“I’m done with the constant price hikes. After years of loyalty, I’m out,” one Netflix user wrote on Reddit earlier this week after canceling.
The unnamed Reddit user attached a screenshot of his membership cancellation.
“I can’t justify paying $30 a month,” another user griped, referring to Netflix’s premium tier.
The frustration comes as Netflix, Disney+, Hulu, HBO Max and other platforms have all raised prices over the past year, pushing monthly streaming costs closer to and even beyond traditional cable bills.
Late last month, Netflix, the industry leader, raised the price of its premium tier to $26.99 a month, up from $24.99 — while its standard plan climbed to $19.99 from $17.99 and its ad-supported tier rose to $8.99 from $7.99.
That was after HBO Max, the Warner Bros. Discovery-owned platform, hiked its Premium plan in October to $22.99 a month, up from $20.99.
Its Standard tier increased to $18.49 from $16.99 and its Basic with Ads plan rose to $10.99 from $9.99.
Disney+ has also steadily raised prices. Last fall, the service announced that its Premium ad-free tier would cost $18.99 a month, up from $15.99, while its ad-supported option climbed to $11.99 from $9.99.
“Just about every major streaming service” has raised prices over the past year, Kourtnee Jackson, a senior editor at CNET, told The Post.
Companies claim the increases are needed to cover rising costs, including expensive content and technology upgrades, Jackson said, noting that streaming platforms are investing heavily in live sports, gaming and new features.
Quote:Amazon is facing a bombshell class action lawsuit accusing the tech giant of purposely letting the software in Fire TV Stick devices peter out so customers would feel compelled to buy newer versions.
The company allegedly “bricked” its first- and second-generation Fire TV Stick devices by cutting off software support and upgrades, according to a suit filed in California state court earlier this month.
As the streaming devices started to glitch, Amazon did not provide refunds or software upgrades – an attempt to steer customers toward replacement purchases, the suit alleged.
Amazon did not immediately respond to The Post’s request for comment.
The first- and second-gen Fire TV Stick devices were released in 2014 and 2016, respectively, and allowed customers to stream thousands of movies and shows from platforms like Amazon Prime and Netflix by plugging the Stick into a TV’s HDMI port.
Amazon has since released half a dozen new Fire TV Stick models, including two launched last year – the Fire TV Stick 4K Select and 4K Plus, which retail on Amazon’s site for about $40 to $50 at full price. The online retail giant often discounts the devices.
In December 2022, it stopped providing any software support or updates for its first-gen devices, according to the lawsuit. It ended updates for second-gen devices in March 2023, the suit said.
Bill Merewhuader, a California resident and the plaintiff in the suit, purchased a second-generation Fire TV Stick from Best Buy in 2018 – but the failing software eventually left the remote “inoperable,” forcing him to buy a new version in 2024, according to the suit.
Some Amazon customers have complained that their devices have stopped working altogether, while others have claimed that theirs are much slower and face significant buffering times, according to the lawsuit.
The suit – which seeks unspecified damages and nationwide class-action status – accuses Amazon of “deceptive” marketing, claiming the company never informed customers that it could cut off updates to the devices for any reason, at any time.
Quote:Amazon said Tuesday it would acquire Globalstar in an $11.57 billion deal, bolstering its fledgling satellite business as it tries to catch up with Elon Musk’s Starlink.
Tech companies are pouring in billions of dollars to capture the lucrative market for satellite-based connectivity, but it will be a tall order to match Starlink’s 10,000-unit-strong network.
Through the deal, Amazon adds Globalstar’s two dozen satellites to its existing network of more than 200.
Amazon has been working to ramp up its network by deploying about 3,200 satellites in Earth’s low orbit by 2029, with roughly half required to be in place by a July regulatory deadline.
It is also preparing to roll out its satellite internet services later this year.
Globalstar’s satellite network is designed for reliable, low-data connections directly to mobile devices, or Direct-to-Device (D2D).
The technology removes the need for devices to connect to ground-based cellular towers, making them crucial in powering emergency services and delivering connectivity in areas with limited cellular coverage.
The deal will help Amazon deploy D2D from 2028, the companies said.
Meanwhile, Starlink already serves more than nine million users globally.
The SpaceX unit, which provides high-speed broadband through user terminals, is also developing D2D services through partnerships with telecom operators such as T-Mobile.
“Amazon has been falling behind Starlink on satellite broadband. Acquiring Globalstar allows them to catch up on their D2D spectrum position, and leap ahead on D2D deployment,” said Armand Musey, president and founder of Summit Ridge Group.
Shares of Louisiana-based Globalstar rose more than 9% in early trading, after gaining over 6% in the past two weeks on media reports of the companies’ discussions.
Quote:Three large advertising agencies settled a Federal Trade Commission probe accusing them of violating antitrust law by conspiring to boycott online media platforms based on political content they didn’t like, the agency said Wednesday.
Investigators accused Dentsu, Publicis and WPP of steering clients’ ads away from platforms featuring “disfavored” viewpoints, ostensibly to promote “brand safety” and target misinformation identified by left-leaning media watchdogs.
The FTC said websites containing such content risked becoming ineligible for ad placements because of collusion.
Its complaint filed in the Fort Worth, Texas, federal court cited alleged concerns about misinformation on Elon Musk’s X and the conservative website Breitbart.
“This unlawful collusion not only damaged our marketplace, but also distorted the marketplace of ideas by discriminating against speech and ideas that fell below the unlawfully agreed-upon floor,” FTC Chairman Andrew Ferguson said in a statement.
Wednesday’s settlements with the FTC and eight Republican-led states require Dentsu, Publicis and GroupM to stop alleged efforts to set common brand safety standards, or use “exclusion lists” when placing ads.
The ad agencies did not admit or deny wrongdoing in agreeing to settle.
Florida, Indiana, Iowa, Montana, Nebraska, Texas, Utah and West Virginia joined the settlements.
In a statement, Dentsu said it was committed to operating transparently, with integrity and in compliance with the law.
WPP said separately it was committed to giving clients unbiased advice on where to place ads. Publicis did not immediately respond to requests for comment.
Quote:Consumer organizations are warning the world’s 1.8 billion iPhone users about a sinister email scam designed to pilfer personal info and loot banking details.
According to reports, users receive seemingly legitimate messages informing them that their iCloud storage is full. The messages prompt victims to upgrade their accounts or risk losing important data, namely all their photos.
The messages include a link that supposedly enables the upgrade to a larger plan, and the fraudulent email is aesthetically similar to genuine Apple communications and is even signed by “The iCloud Team.”
However, reports indicate that clicking the upgrade link or button redirects users to a phishing website designed to steal banking information and personal details.
Those who attempt to make a payment may have their details and data stolen and distributed on the dark web for nefarious purposes.
According to victims of the storage swindle, some messages are more alarming and exploit time sensitivity, telling users that their iCloud account will close within 48 hours unless immediate action is taken.
“Every Apple user needs to know about this nasty scam doing the rounds,” an independent consumer organization shared on Facebook.
“These sneaky fake emails that look like they’re from iCloud and threaten you with claims that ‘all your photos will be deleted,'” the post forewarned.
“I get them nearly every day, and I don’t even have an iPhone,” said one commentator.
“Going digital has made being mugged so much easier for the mugger,” lamented another.
Others pointed out that, to address account issues, Apple typically instructs customers to “go into your settings” and never redirects them to pay via a link.
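That last tip generalizes into a cheap check anyone can apply before clicking: the link’s actual hostname must be apple.com, icloud.com, or a subdomain of one, since phishing pages typically sit on look-alike domains that merely start with an Apple name. A minimal sketch (the sample URLs are invented for illustration):

```python
from urllib.parse import urlparse

APPLE_DOMAINS = {"apple.com", "icloud.com"}

def looks_like_apple(url: str) -> bool:
    """True only if the hostname is an Apple domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in APPLE_DOMAINS)

# Genuine: the registrable domain really is icloud.com / apple.com.
assert looks_like_apple("https://www.icloud.com/settings")
assert looks_like_apple("https://support.apple.com/billing")

# Spoofs: "icloud.com" appears only as a prefix; the real domain is
# storage-help.net / evil.example, so the check rejects them.
assert not looks_like_apple("https://icloud.com.storage-help.net")
assert not looks_like_apple("https://apple.com.evil.example/pay")
```

The suffix test matters: naively checking whether “icloud.com” appears anywhere in the URL would pass both spoofed examples.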
Quote:Apple fans are already folding on Apple — and the device isn’t even out yet.
Leaked images of a supposed “dummy model” for the long-rumored iPhone Fold have sparked a mini meltdown online, with die-hards dragging the tech giant for what they say looks more flop than flip.
Australian leaker Sonny Dickson dropped the images on X, teasing: “Exclusive First Dummies of what the final size of the iPhone Fold, iPhone 18 Pro and iPhone 18 Pro Max will look like.”
What followed? A brutal pile-on.
One user didn’t mince words: “Apple has lost its way.”
“That fold is horrific, it’s so tiny and will still cost $2300. Typical Apple,” an additional unimpressed viewer fired back.
And if there were any lingering doubts about the vibe, one commenter summed it up in three savage words: “Omg the fold is so… ugly?”
From the photos, the alleged foldable appears to take a book-style approach — opening horizontally into a tablet-like screen — with a chunky frame and a dual-lens camera bump on the back.
But for fans used to Apple’s sleek, minimalist aesthetic, the early look isn’t exactly love at first swipe.
“The Fold is too wide, can’t palm that easily in normal phone use,” one user griped.
Others homed in on what could be a make-or-break miss: the apparent absence of MagSafe — Apple’s magnetic snap-on system, first rolled out with the iPhone 12, that lets chargers, wallets and other accessories click satisfyingly into place.
The backlash is notable — especially for a product that hasn’t even been officially confirmed by Apple.
Still, behind the scenes, signs point to the foldable finally becoming a reality.
Quote:Apple reportedly threatened to yank Elon Musk’s Grok from its App Store over complaints the AI app wasn’t doing enough to stop users from creating nude or overly sexualized deepfakes — a potentially major blow as Grok came under international scrutiny for the content it was being used to create.
The threat, which surfaced in a recently revealed missive to US senators, came after Apple determined that Grok — along with Musk’s social media site X — was in violation of Apple rules barring overtly sexual material.
Apple took the drastic step after asking X and Grok to clamp down on functions that allowed users to create sexualized deepfakes, according to a Jan. 30 letter cited by NBC News.
Apple had determined Grok’s efforts to address the problem — which included the use of AI to undress images of people without their consent — hadn’t gone far enough, Apple reportedly wrote Democratic Sens. Ben Ray Luján of New Mexico, Ed Markey of Massachusetts and Ron Wyden of Oregon.
On Jan. 14, X had announced a crackdown on using AI to undress images, saying that the restriction “applies to all users, including paid subscribers.”
And Apple reportedly said it asked X and Grok to come up with a plan to improve content moderation, though that was found to be lacking.
“Apple … determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store,” Apple wrote the senators.
Quote:They’re tossing tech to the trash and seizing a retro reboot.
Gen Zers are ditching sleek smartphones and algorithm-fed apps for vintage flip phones, once-coveted iPods, digital cameras, even typewriters — and jump-starting a simpler, less plugged-in life.
And parents are scooping up retro tech for their children, too, as a way to preserve family life and delay the deluge of doomscrolling that is trapping kids into digital addiction.
About a year ago, Sonya Saydakova, a grad student at New York University, switched from an iPhone to a dumbed-down Nokia 2780 flip phone.
“It’s an indescribable feeling to feel so detached and not constantly available,” the 23-year-old raved to The Post.
Saydakova got a movie theater membership, picked up a digital camera and a CD player — and she quit Spotify. She also asks for directions instead of solely relying on Google Maps, saying the interactions with people on the street have enriched her life.
Reducing her screen time, Saydakova told The Post, has made her feel liberated, focused, happier — and less anxious.
“We’re culturally at a breaking point,” she maintained. “People are just sick of it.”
Alex Becker, a 34-year-old mother who lives outside of Philadelphia, shares Saydakova’s desire to eschew tech, telling The Post she is one of “many” parents who have “no interest in getting their kids a smartphone or an iPad.”
Instead, she wants her children, 5 and 2, to experience the “joy of childhood” without “the online drama,” she said.
“The second kids get these devices, the innocence of childhood is lost. That’s what I hear from so many parents, like, ‘My daughter is spending every day on Instagram and Snapchat, wanting to buy skincare products, when six months ago she was reading Narnia books.’”
The low-tech switch is part of a “broader cultural shift away from constant connectivity” and “digital overload,” according to Amanda Michel, US director of marketing at Backmarket, an online marketplace for refurbished electronics.
Michel told The Post — in an email, ironically enough — that the site is seeing a “renewed interest in older, simpler devices,” with consumers scooping up Wi-Fi-free iPods, MP3 players, vintage gaming consoles, handheld cameras and more.
Quote:April 15 – As people increasingly turn to artificial intelligence for advice, some US lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.
These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.
In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.
“We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.
People’s discussions with their lawyers are almost always deemed confidential under US law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major US law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.
Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.
A judicial ruling
The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.
Heppner had used Anthropic’s chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense.
Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.
Quote:Meta staffers will soon have the option of chatting with a creepy-sounding virtual clone of CEO Mark Zuckerberg, according to a report published Monday.
The AI-powered Zuck will be a “photorealistic” 3D copy of the eccentric executive and is being trained to recreate his mannerisms, tone and even voice, the Financial Times reported, citing people familiar with the matter.
The 41-year-old billionaire is said to be personally involved in building his AI doppelganger, which will be able to spout his publicly available statements and “his own recent thinking on company strategies,” according to the FT.
The project’s goal is reportedly to help employees “feel more connected” to Zuckerberg.
Meta representatives were not quoted in the FT article and did not immediately respond to a Post request for comment.
The Zuckerberg clone is one of multiple AI-powered characters currently in development at Meta, according to the FT. For now, it’s unclear who else will be portrayed.
The digital Zuck initiative drew jeers online, with commenters referring to the tech titan’s well-known history of awkward public appearances.
Using the AI version of Zuckerberg is “probably less weird than engaging with the real version,” one X user quipped.
“This sounds like a horror movie. You’re at work but now you have to run all decisions by robot Zuck,” another wrote.
Quote:Snap shares spiked 7% on Wednesday after billionaire CEO Evan Spiegel revealed plans to slash about 1,000 jobs and rely on artificial intelligence to take over their work.
Spiegel, whose personal fortune is pegged by Forbes at $2.3 billion, said he was “deeply sorry” in a staff memo announcing the cuts, which amount to 16% of the Snapchat parent’s overall workforce.
The company is also closing more than 300 open roles.
“While these changes are necessary to realize Snap’s long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers,” Spiegel said in the memo.
“We have already witnessed small squads leveraging AI tools to drive meaningful progress across several important initiatives, including Snapchat+, enhanced ad platform performance, and efficiency improvements in our Snap Lite infrastructure,” Spiegel added.
Snap employees in North America were told to work from home on Wednesday following the announcement, with impacted workers learning their fate by email.
The social media firm, which has struggled with intense competition from rivals like Instagram and TikTok, had about 5,261 full-time employees as of the end of last year.
The layoffs came as Snap faced pressure from activist investor Irenic Capital Management, which had pushed the company to streamline its business, according to Reuters.
Irenic advised Snap to either spin off or shut down its “Specs” augmented reality glasses business and enact other cost-cutting moves.
Even after Wednesday’s intraday trading gains, Snap shares were still down about 26% since the start of the year.
Quote:A former Oracle employee accused the tech giant of targeting workers “with outstanding stock options” in a recent round of layoffs — as the company reportedly offered its new chief financial officer a juicy $26 million stock package.
A 30-year Oracle veteran recently took to LinkedIn as the Larry Ellison-led company laid off about 700 workers, with thousands more cuts potentially in the offing.
“Well, after 30+ years at Oracle, I join the 30,000 or so laid off today. Quite a shock. Many of the absolute best colleagues were laid off as well,” Nina Lewis wrote.
“It seems (BUT I DON’T KNOW), maybe, layoffs follow an algorithm of high level individual contributors and mid-level managers – especially those with outstanding stock options,” she continued.
“Not sure what to do next, if anything. Open to ideas,” Lewis concluded with a smile emoticon.
Laid-off employees immediately forfeited their unvested stock, according to Marketwise, though their vested stock remained accessible.
Oracle declined to comment.
Lewis clarified in a follow-up post that she had “NO specific inside knowledge of any layoff algorithm” but that rumors circulating among employees “appear to match what we see around us as a possible pattern.”
“Again, I have no inside knowledge of any ‘hidden algorithm’, although there must be some system/algorithm if you are laying off 30k people,” she added.
Other former employees voiced similar suspicions on workplace forums like Blind and TheLayoff.com, with some claiming they were laid off shortly before upcoming vesting dates.
Quote:Booking.com phishers could be invading your inbox.
The travel and hotel reservation platform notified customers this past week that their personal information, such as names, email addresses, phone numbers and booking details, may have been compromised in a breach, according to posts on social media.
“We’re writing to inform you that unauthorized third parties may have been able to access certain booking information associated with your reservation,” the notification read, according to a screenshot posted on Reddit.
The message noted that “anything that you may have shared with the accommodation” could have been part of the information stolen.
A spokesperson for the company told The Guardian that “financial information was not accessed.”
Courtney Camp, a Booking.com spokesperson, told TechCrunch that the company “noticed some suspicious activity involving unauthorized third parties being able to access some of our guests’ booking information.”
“Upon discovering the activity, we took action to contain the issue. We have updated the PIN number for these reservations and informed our guests,” she said.
Attackers are using the hotel and messaging systems tied to reservations to send convincing requests and messages to consumers, often mimicking real hotel communication on the booking platform so closely that the messages are hard to distinguish from legitimate ones.
The Reddit user who initially posted the notification to the subreddit r/Bookingcom told TechCrunch that they had received a phishing message through WhatsApp two weeks ago that included their booking details and personal information.
In some cases, travelers have reported being asked to “reconfirm payment” or “verify identity” shortly before their arrival.
Customers should look out for messages that look legitimate but feel off, urgent payment requests, last-minute confirmation emails or texts that appear to be tied directly to their reservation.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.