LINKEDIN
Quote:The Delaware judge who once ordered that Elon Musk’s pay package be revoked is stepping aside from several ongoing cases against him over allegations of bias.
Court of Chancery Chancellor Kathaleen St. J. McCormick announced the stunning move Monday after Musk’s lawyers accused her of having it in for the billionaire, pointing to a LinkedIn post that appeared to show her “supporting” commentary mocking him.
Musk’s nemesis said in a filing that she will reassign the group of suits to different judges but insisted she was not in fact biased against the high-profile defendant.
“The motion for recusal rests on a false premise — that I support a LinkedIn post about Mr. Musk, which I do not in fact support,” she wrote. “I am not biased against the defendants in these actions.
“But the motion for reassignment is granted,” McCormick continued. “As should be obvious, disproportionate media attention surrounding a judge’s handling of an action is detrimental to the administration of justice.
“Fortunately, the Court of Chancery is far greater than any one person.”
She said the cases would be taken over by three colleagues in Delaware’s Court of Chancery — the nation’s premier venue for corporate litigation, where judges routinely decide high-stakes disputes involving fiduciary duties and board governance for companies incorporated in the state.
Last week, lawyers for Musk demanded McCormick recuse herself because she pressed a button indicating she “supported” a post mocking Musk for being found liable for tweets he posted in 2022 about his $44 billion Twitter deal. LinkedIn’s “support” reaction works much like “liking” a post on other social media platforms.
Musk’s attorneys said the judge’s alleged social media activity created an unavoidable appearance of bias under Delaware law, which requires recusal where there is “any reasonable basis to question the impartiality of the trial judge.”
“I either did not click the ‘support’ icon at all, or I did so accidentally. I do not believe that I did it accidentally,” the jurist replied last week.
The litigation before McCormick involved consolidated shareholder derivative lawsuits accusing Musk and Tesla’s board of breaching fiduciary duties, including claims tied to executive compensation and broader corporate governance issues.
One of the central cases, brought by a Detroit pension fund, challenges how Tesla’s directors awarded themselves stock-based compensation, alleging the company was harmed by excessive pay and weak oversight.
The lawsuits have been combined with related claims, some of which involve Musk’s conduct surrounding the 2022 Twitter deal, creating overlap with issues raised in the recent federal case in California.
McCormick has been at the center of multiple headline-grabbing cases involving Musk, including the 2022 lawsuit that compelled him to complete his $44 billion acquisition of X, then known as Twitter, after he’d attempted to walk away from the deal.
Quote:An investigation by Fairlinked e.V., a group representing commercial LinkedIn users, reveals that the popular business-focused social platform has been secretly collecting sensitive user data, potentially affecting 405 million people.
According to the report, LinkedIn deploys code on its website that scans users’ browsers for installed software, including browser extensions.
The code checks for thousands of specific extensions using their unique identifiers, compiles the findings, encrypts the data, and sends it to LinkedIn’s servers. According to the report, LinkedIn shares this data with third-party companies, including an American-Israeli cybersecurity firm, HUMAN Security.
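The probing technique the report describes is a known browser-fingerprinting trick: in Chromium browsers an extension’s ID is a fixed 32-character string, so a page script can test each ID (for example, by trying to fetch a resource the extension exposes) and record which probes succeed. Below is a minimal sketch of the general approach; it is illustrative only, not LinkedIn’s actual code, and the `scanExtensions` name, the IDs, and the probe function are all hypothetical.

```javascript
// Sketch: fingerprint installed extensions by probing a list of known IDs.
// The `probe` callback is injected so the scanning logic can run (and be
// tested) outside a browser. In a real page it might look like:
//   id => fetch(`chrome-extension://${id}/manifest.json`)
//           .then(() => true, () => false)
async function scanExtensions(knownIds, probe) {
  const found = [];
  for (const id of knownIds) {
    // A successful probe means the extension answered, i.e. it is installed.
    if (await probe(id)) found.push(id);
  }
  return found; // detected extension IDs, ready to be compiled and reported
}

// Demo with a mock probe standing in for the real browser check.
const installedMock = new Set(["aaaabbbbccccddddeeeeffffgggghhhh"]);
scanExtensions(
  ["aaaabbbbccccddddeeeeffffgggghhhh", "zzzzyyyyxxxxwwwwvvvvuuuuttttssss"],
  async (id) => installedMock.has(id)
).then((found) => console.log("detected:", found));
```

The key point is that no permission prompt is involved: the page only observes which resource requests succeed, which is why this kind of scan can run silently in the background.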
All data extraction occurs silently in the background without explicit user consent and is not disclosed in LinkedIn’s public privacy policy.
That is stirring privacy controversy, because LinkedIn accounts reveal real identities, including users’ names, employers, and job titles, and any collected data could be linked with identifiable individuals.
The claims were published as part of the group’s “BrowserGate” campaign. The investigator group calls it one of the “largest corporate espionage and data breach scandals in digital history.”
What data is being harvested when you use LinkedIn?
Some of the browser extensions identified in the scan may indicate sensitive personal information, including religious beliefs, political views, health conditions, or whether a user is actively seeking employment.
According to the report, Microsoft injects malicious JavaScript into the LinkedIn website that searches each user’s browser for installed software applications. In total, LinkedIn is said to scan for more than 6,000 extensions.
“LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile,” the group said.
Under the European Union’s General Data Protection Regulation (GDPR), processing such categories of data typically requires explicit user consent. Fairlinked alleges that LinkedIn does not obtain this consent or disclose the practice.
LinkedIn is also reported to detect a wide range of competing software tools, including major platforms like Salesforce, HubSpot, and Pipedrive, potentially allowing it to map which companies rely on which services.
In total, the scan is said to cover more than 200 competing products, including tools such as Apollo, Lusha, and ZoomInfo.
"We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members' data, which, at scale, impacts site stability. We do not use this data to infer sensitive information about members," LinkedIn said in response.
FBI ALERT
Quote:Americans’ personal data could be collected and stored overseas — even if they’ve never downloaded a foreign-developed app themselves — according to a new FBI alert warning about the risks tied to popular mobile platforms.
That means information like a person’s name, email address or phone number could be pulled from someone else’s contact list and potentially stored abroad if a friend or family member grants an app access to their device.
The warning comes after years of scrutiny over TikTok’s ties to China, but the FBI alert suggests the concerns extend beyond any single platform to a broader range of foreign-developed apps.
In a public service announcement, the FBI said many widely used apps developed overseas, particularly those tied to China, may access extensive data once permissions are granted, including address books containing information on both users and non-users.
The bureau also warned that some apps may continue collecting data in the background after access is granted and, in certain cases, store that information on servers in countries where local laws could allow government access.
“Developer companies can store collected data on users’ private information and address books, such as names, e-mail addresses, user IDs, physical addresses, and phone numbers of their stored contacts,” the FBI said. “The app can persistently collect data and users’ private information throughout the device, not just within the app or while the app is active.”
The FBI did not name specific companies, but the warning could apply to a range of widely used apps developed by Chinese firms — including video-editing platform CapCut, shopping apps like Temu and SHEIN, and social media platforms such as Lemon8 — several of which rank among the most downloaded apps in the United States.
U.S. officials have long warned that data collected by Chinese-linked platforms could be used to build detailed profiles of Americans, map personal and professional networks, and potentially support intelligence-gathering efforts, particularly if accessed under China’s national security laws.
The FBI added that apps operating in China are subject to the country’s national security laws, which could allow the government to access user data.
The FBI also pointed to possible warning signs that an app may be collecting more data than expected, including unusual battery drain, spikes in data usage, or unauthorized account activity after installation — indicators that could suggest background data collection or other suspicious behavior.
The bureau urged users to limit unnecessary data sharing, download apps only from official app stores, and regularly review permissions granted to mobile platforms. The bureau also warned that apps obtained from third-party sites may carry malware designed to gain unauthorized access to personal data.
CAPTCHAS
Quote:There’s a new scam to look out for in a place you wouldn’t expect.
Security experts at the Identity Theft Resource Center (ITRC) are warning about a rise in “CAPTCHA scams,” a growing threat that weaponizes the little checkbox meant to protect consumers and keep bots out.
Instead of protecting websites and verifying that users are human, the fake prompts are being used to trick people into enabling malware and fraud.
Users will end up on a webpage, likely through a misleading ad, suspicious download link or pirated content site, and they’ll immediately be presented with what appears to be the standard human verification test.
But rather than simply checking a box and/or selecting images, the page will ask users to take additional steps, like clicking “Allow” on a browser notification request, or copying and pasting a command into their system.
Clicking “Allow” can inundate the user’s device with scam notifications, such as fake virus alerts, phishing links or fraudulent offers. In some cases, following the instructions can lead to the installation of malicious software.
The website might tell you there’s an error and provide these “simple” steps to fix it, such as pressing a specific sequence of keys on your keyboard, like the Windows Key + R, then Ctrl + V.
When this happens, the commands prompt the computer to open a hidden command box, paste in a “script” that the attacker wrote and run that script, which downloads a virus onto the computer.
Unlike traditional phishing scams, CAPTCHA scams — which have been seen on both desktop and mobile browsers — tend to rely on compromised advertising networks or redirect chains that send users to malicious pages without any clear warning sign.
Part of the reason why so many people fall for these scams is that CAPTCHA prompts usually appear when users are trying to access something quickly, and the urgency pushes caution out the window.
Plus, a fake CAPTCHA looks like a legitimate prompt, giving users no obvious reason to be suspicious of it.
Experts have emphasized that real CAPTCHAs will never ask users to enable browser notifications, run commands, use keyboard shortcuts or download additional software. If a site asks you to open a “Run” box or paste a code, it’s a scam.
ANTHROPIC
Quote:A U.S. judge on Thursday temporarily blocked the Pentagon’s blacklisting of Anthropic, the latest turn in the Claude maker’s high-stakes fight with the military over AI safety on the battlefield.
Anthropic’s lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries.
Hegseth’s unprecedented move, which followed Anthropic’s refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts.
Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm.
Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action.
U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out.
Lin’s ruling is not final, and the case is still pending.
The designation marked the first time a U.S. company had been publicly labeled a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.
In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety.
The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.
The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military’s past praise of Claude.
Quote:Anthropic has been scrambling to contain a self-inflicted mess after it accidentally leaked a treasure trove of internal code that powers one of its most valuable artificial intelligence tools, according to reports.
The code serves as instructions for Claude Code, an AI agent app that developers and businesses pay top dollar to use to program and build applications of their own.
Anthropic’s competitors and hordes of startups and developers now have the goods to essentially clone features of Claude Code — a shortcut to reverse-engineering them, the Wall Street Journal noted.
By Wednesday morning, Anthropic representatives had used a copyright takedown request to remove more than 8,000 copies and adaptations of the source code that developers had shared on the programming platform GitHub.
The leak of “some internal source code” didn’t expose any customer information or data, a spokesman for Anthropic told The Post. The secret inner mathematics of the company’s pricey AI models reportedly weren’t revealed, either.
“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again,” the spokesman said.
Still, the leak revealed information that helps the company stay ahead of competitors, including tools and instructions for getting its AI models to work as coding agents, according to the Journal.
The leak also gives hackers fresh ammunition as they hunt for ways to exploit Claude Code software or use its model to launch cyberattacks.
The snafu reportedly began Tuesday, when Anthropic updated its AI tool. Like most proprietary software, Claude’s source code is usually scrambled and unintelligible. But this time, the company posted a file to GitHub that linked back to code that outsiders could download and interpret.
The blunder was spotted by a user on social media site X, and word spread from there.
YOUTUBE
YouTube staffers deliberately aimed for ‘viewer addiction,’ killed safety tools for kids: court docs
Quote:YouTube employees admitted that their goal was “viewer addiction” and killed proposed safety tools for kids because they wouldn’t provide a sufficient “ROI” — financial lingo for “return on investment,” according to bombshell court documents reviewed by The Post.
The explosive records, which include internal chat logs and presentations from YouTube employees, were unsealed ahead of a series of landmark trials slated for this summer in Oakland, Calif., in the US District Court for the Northern District of California. Google-owned YouTube, Meta, Snap and TikTok are listed as defendants.
In a deposition in the case last March, John Harding, a longtime vice president of engineering at YouTube, was confronted by plaintiffs’ attorneys with an internal email from June 7, 2012, in which a YouTube employee, whose name was redacted, stated the “goal is not viewership, it’s viewer addiction.”
Harding confirmed that the email was authentic but dodged responsibility, claiming that staffers were discussing a “video creation app” that “wasn’t even built for viewers.” The next portion of the exchange between Harding and the attorney is redacted.
The federal case is part of what legal experts and critics have called a “Big Tobacco” moment for Google and Meta. Both companies were found liable last week for fueling social media addiction in a separate landmark case brought in California state court on behalf of a 20-year-old woman known as KGM.
The shock revelations from the Oakland federal case contradict public statements from executives who have claimed the app was never meant to be addictive and any harmful outcomes for kids are due to third-party content rather than its intentional app design choices.
During the state trial last month, YouTube executive Cristos Goodrow testified that the app was “not designed to maximize time” and the company doesn’t “want anybody to be addicted.”
This summer’s federal case in Oakland, however, includes an internal YouTube presentation from April 2018 recounting study findings that “excessive video watching is related to addiction” and that it results in a “‘quick fix’ of dopamine.”
The presentation even includes a colorful flow chart labeled “addiction cycle,” complete with arrows showing how “guilt” is an “emotional trigger” that leads to “craving, ritual and using.”
“Researchers feel that YT is built with the intention of being addictive,” the document said. “Designed with tricks to encourage binge-watching (i.e., autoplay, recommendations, etc.).”
US District Judge Yvonne Gonzalez Rogers is presiding over a case that centralizes more than 2,000 pending lawsuits against social media firms that make similar allegations. A group of school districts has a trial date in June, while a coalition of state attorneys general will face off against Big Tech’s attorneys beginning in August.
AI
Quote:Artificial intelligence chatbots feed into humans’ desire for flattery and approval at an alarming rate and it’s leading the bots to give bad — even harmful — advice and making users self-absorbed, a new study found.
The chatbots overwhelmingly adopt a people-pleasing, “sycophantic” model to keep a captive audience and, in turn, distort users’ judgment, critical thinking and self-awareness, warns the Stanford University study, published on Thursday.
The study probed 11 AI systems, ranging from ChatGPT to China’s DeepSeek, and found that each shows some form of sycophancy — that is to say, they are overly agreeable with their users and affirm their thoughts with little to no pushback.
The 11 chatbots affirmed a user’s actions an average of 49% more often than actual humans did, including in questions indicating deception, illegal or socially irresponsible conduct, and other harmful behaviors, the study found.
The fawning tendency — a tool used by the bots to keep users engaged and coming back for more — becomes particularly unhealthy when users go to AI for advice, the study found.
“We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what,” said study author Myra Cheng, a doctoral candidate in computer science at Stanford.
The researchers noted that the sycophantic cycle “creates perverse incentives,” since it continues to “drive engagement” despite being the bot’s most harmful feature.
They emphasized that the average user is likely cognizant of the bots’ affirmation, but doesn’t realize that it “is making them more self-centered, more morally dogmatic.”
Users were given advice that could worsen relationships or reinforce harmful behaviors, leading to an erosion of social skills.
“People who interacted with this over-affirming AI came away more convinced that they were right, and less willing to repair the relationship. That means they weren’t apologizing, taking steps to improve things, or changing their own behavior,” study co-author Cinoo Lee explained.
At the same time, more people are turning to AI as a replacement for traditional therapists — the very professionals who are trained to help dismantle harmful habits and ways of thought.
In extreme cases, some companies’ chatbots have goaded suicidal users to take their own lives. The study warns that this same technological flaw still persists across a wide range of users’ interactions with chatbots.
Quote:Bots, be gone.
The internet’s favorite encyclopedia has officially banned its 260,000 human editors from using artificial intelligence to write articles — a major crackdown as so-called “AI slop” floods the web.
The new policy, approved by volunteers at the Wikimedia Foundation’s flagship site Wikipedia, bars the use of large language models (LLMs) like ChatGPT from generating encyclopedic content, citing concerns over accuracy, sourcing and reliability.
Wikipedia leaders say AI-generated text often breaks the site’s core tenets, including strict standards around verifiability and neutrality, because chatbots are prone to so-called “hallucinations” — made-up facts, broken links and references that lead nowhere.
Editors can still use AI in limited ways, such as translating articles from other languages or suggesting minor copy edits, as long as humans review every change and no new information is introduced.
Last year, Wikipedia came up with its own bot-detection guidelines for editors that highlight common “tells” of AI writing. Editors are trained to spot red flags like inaccurate or fake citations, overused phrases and cliches, wordy explanations and sudden style transitions.
Suspected cases are typically reviewed by other editors who can challenge, revise or remove questionable content.
Ilyas Lebleu, a volunteer Wikipedia editor in France and founding member of the WikiProject AI Cleanup squad, told NPR in September, “We started to notice a lot of articles which were written in a style that didn’t match the style we usually saw on Wikipedia.”
Last October, Wikipedia co-founder Jimmy Wales also blasted current AI models as unreliable, calling the situation a “mess,” per the BBC, and warning that the tech is not ready to replace human editors.
The policy change comes after months of debate among Wikipedia’s moderators, who accepted the new rules in a 40 to 2 vote.
Lebleu, who uses the handle Chaotic Enby on the site, helped write the new guideline, telling 404Media last week that the change has been a long time coming as the growing number of AI-generated articles had become unmanageable for editors.
“The mood was shifting, with holdouts of cautious optimism turning to genuine worry,” he said.
Still, there’s concern among Wikipedia leaders and supporters that the AI takeover has already gone too far. According to recent data, ChatGPT has already overtaken Wikipedia in monthly visits, with human page views down 8% in late 2025 compared with 2024.
Quote:Perplexity AI CEO Aravind Srinivas is coming under fire for arguing people should embrace being replaced by artificial intelligence since they don’t like their jobs, anyway.
The co-founder of the San Francisco-based company even said on the All-In podcast that the jarring shift in how work gets done will lead to a “glorious future” everyone should be happy about.
“The reality is most people don’t enjoy their jobs,” the exec said on the episode published Monday.
“There’s suddenly a new possibility, a new opportunity, to use these tools, learn them, and start your own mini business,” he opined. “Even if there is temporary job displacement to deal with, that sort of glorious future is what we should look forward to.”
Listeners were quick to voice outrage, with some saying Srinivas was out of touch with everyday people who are struggling to make ends meet after getting laid off.
“A man worth millions just told the single mother who lost her job that she should be grateful because now she can start a business using his product and called her unemployment a glorious future,” one commenter wrote on X. “This is what happens when you’ve never needed a paycheck to keep the lights on.”
Asked for comment Tuesday, a Perplexity spokesperson told The Post: “Since Perplexity launched in December 2022, Americans have filed 16 million new business applications, contributing to the reversal of a 40-year decline and proving yet again that breakthrough technologies don’t eliminate opportunity, they create it.”
Recent months have seen a number of large companies announce brutal layoffs — with some firms, like Amazon and Block, blaming AI for at least part of the trend.
“His view treats job loss as a temporary shock that opens a path toward one-person or very small firms that produce real revenue without the payroll that older companies needed,” one commenter wrote.
“But the problem with this scenario is that losing a stable paycheck is painful for most, and many workers cannot instantly become founders. Economists still disagree on whether AI is replacing labor at large scale or merely giving companies a new excuse for cuts.”
Quote:Meta is slashing hundreds of employees in Silicon Valley as the tech giant heavily invests in artificial intelligence and weighs axing over 20% of its workforce.
The Facebook parent company is cutting nearly 200 workers in the San Francisco Bay Area, according to new state filings.
The reductions will hit 124 employees in Burlingame, Calif. and another 74 in nearby Sunnyvale, with the cuts taking effect in late May and all affected positions permanently eliminated, filings cited by the San Francisco Chronicle show.
“Teams across Meta regularly restructure or implement changes to ensure they’re in the best position to achieve their goals,” a Meta spokesperson told The Post.
“Where possible, we are finding other opportunities for employees whose positions may be impacted.”
The company added that it was still hiring for critical roles and that its headcount as of Dec. 31, 2025 was 78,865 — a 6% increase year-over-year.
The move comes as Meta signals a massive strategic shift — away from labor-heavy operations and toward machine-driven systems, according to experts. Recent AI efforts include a planned $10 billion spend on Meta’s data center in El Paso, Texas.
Meanwhile, recent weeks have seen the company lay off about 700 employees working in operations, recruiting, sales and Meta’s “Reality Labs” unit, the Chronicle noted.
The company is also weighing far deeper cuts.
Senior employees have reportedly been told to prepare for layoffs that could affect more than 20% of the company’s workforce — about 15,000 workers.
“This is a speculative report about theoretical approaches,” a Meta spokesperson said when asked about the plan.
The potential reductions would mark the biggest layoffs at Meta since Zuckerberg oversaw more than 20,000 job cuts during the company’s “year of efficiency” push in 2022 and 2023.
On a Meta earnings call, Zuckerberg said Meta is “starting to see projects that used to require big teams now be accomplished by a single, very talented person,” thanks to AI tools.
“When a company is cutting hundreds of people and at the same time gearing up to spend $135 billion on AI, it’s sending a very clear message: the center of gravity is shifting from human-powered operations to machine-augmented operations,” Matt Britton, author of “Generation AI,” told The Post.
Quote:A vicious online attack — allegedly put into motion by a California nonprofit — to torpedo the construction of a massive AI data center led to calls for “public executions” and Luigi Mangione-inspired death threats, according to a new lawsuit.
The defamation lawsuit, filed by Imperial Valley Computer Manufacturing and its attorney, Sebastian Rucci, claims nonprofit Comite Civico del Valle (CCV) and the group’s executive director, Jose Luis Olmedo Velez, are attempting to stall the data center project in a bid to force a financial settlement.
The group also allegedly hired Jake Tison to wage a brutal online campaign, “publishing over 100 false and defamatory posts and videos across social media platforms” in an effort to make IVCM and Rucci look bad, according to the lawsuit.
Tison’s purported online posts called Rucci a “life-long fraud” and accused him of violating the California Environmental Quality Act, a statute that has become notorious for being leveraged to gum up development projects across the state, court documents obtained by The California Post said.
The suit alleges Tison spread false posts that Rucci had been thrown in jail for fraud. In reality, Rucci did spend a month in jail but for a misdemeanor liquor license violation, not fraud, according to the suit.
Tison’s alleged online attacks then spiraled into something more violent and dangerous when his followers began to read his posts, according to Rucci and IVCM.
The lawsuit alleges Tison’s followers commented things like “public executions” and threatened to “burn the data center to the ground.” “Why can’t somebody just get him like Luigi did with the UnitedHealthcare CEO,” another wrote.
CCV presents itself as an environmental justice nonprofit, but has “perfected a lucrative greenmail extortion racket: it files CEQA challenges to delay projects, then demands massive ‘public benefit’ settlements that it alone controls,” according to the documents.
“Defendants also engaged in environmental terrorism by intimidating Imperial County Supervisors with threats of ‘slaughter at the voting booth’ and placing their photos on milk cartons to coerce denial of a ministerial lot merger,” according to the documents.
Quote:Elon Musk is requiring banks and other advisers working on SpaceX’s planned IPO to buy subscriptions to Grok, his artificial intelligence chatbot, the New York Times reported Friday, citing people familiar with the matter.
Some banks have agreed to spend tens of millions of dollars a year on the chatbot and have begun integrating it into their IT systems, the report said.
Morgan Stanley, Goldman Sachs, JPMorgan Chase, Bank of America and Citigroup are serving as active bookrunners, or the lead banks managing the deal, Reuters reported earlier this week.
Musk and SpaceX did not respond to Reuters’ requests for comment.
JPMorgan Chase, Goldman Sachs, Citigroup and Bank of America declined to comment. Morgan Stanley did not immediately respond to Reuters’ queries.
The Starbase, Texas-headquartered rocket maker boosted its target initial public offering valuation above $2 trillion, according to a Bloomberg News report a day earlier, setting the stage for what could become the largest stock market listing on record.
The company aims to raise a record $75 billion, which would dwarf previous mega-IPOs such as Saudi Aramco in 2019 and Alibaba in 2014.
GOOGLE
Quote:It’s a case of search-engine failure.
Google has issued a security alert to Chrome users after confirming that cybercriminals had exploited a vulnerability in the browser, marking the second such advisory in days.
Tracked as CVE-2026-5281, the stealth bug is a zero-day exploit — an under-the-radar software or hardware security flaw unknown to the vendor, leaving it “zero days” to fix the hole before attackers take advantage.
Hackers were able to exploit the oversight before a patch became widely available, potentially putting the web browser’s 3.5 billion users at risk, Forbes reported.
CVE-2026-5281 reportedly affects the Dawn WebGPU component of Chrome, which translates a website’s complex graphics instructions for different devices, helping advanced visuals and computations run smoothly across various systems.
Should a cybercriminal manage to exploit the flaw, they could corrupt the browser’s memory and crash the system, potentially allowing them to run malicious code through a dummy HTML page.
Google has remained fairly hush-hush on the nature of the vulnerability, the fourth zero-day Google has patched this year as the browser becomes more and more ubiquitous.
"Access to bug details and links may be kept restricted until a majority of users are updated with a fix," Google Chrome team member Srinivas Sista said in a statement.
However, while Google is rolling out a new security update to remedy this vulnerability, along with a whopping 20 others, it could take weeks to reach users, during which time their systems could be compromised.
In the interim, Chrome users are advised to nip this exploit in the bud. First, open the three-dot menu, go to "Help" and select "About Google Chrome."
This will prompt the browser to automatically install any pending updates, after which users should restart the browser to apply the fix.
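For readers who want to confirm they're on a safe build once Google does publish one, checking boils down to a dotted-version comparison. Here's a minimal Python sketch of that check; note that the patched build number used below is a made-up placeholder, since this report doesn't name the fixed version:

```python
# Compare dotted Chrome-style version strings, e.g. "139.0.7258.66".
# NOTE: the "fixed" build number below is a hypothetical placeholder;
# Google has not published the patched version number in this report.

def parse_version(v: str) -> tuple:
    """Turn '139.0.7258.66' into (139, 0, 7258, 66) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, fixed: str) -> bool:
    """True if the installed build is at or above the fixed build."""
    return parse_version(installed) >= parse_version(fixed)

if __name__ == "__main__":
    FIXED = "139.0.7258.66"  # placeholder, not the real patched build
    print(is_patched("139.0.7258.70", FIXED))  # newer build -> True
    print(is_patched("138.0.7200.10", FIXED))  # older build -> False
```

Tuple comparison handles the ordering correctly because Python compares the numeric components left to right, which matches how browser build numbers are versioned.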
These aren't Google's first zero-day flaws to be exploited of late.
Quote:It’s about time.
On Tuesday, Google announced it will now allow US users to change their Google Account username without opening a new account or losing access to their data.
Translation: you're no longer stuck with a regrettable early-aughts account name (looking at you, slackerboy666, kandyraver69 and chewbacca_is_my_stock_broker).
It seems the feature is long overdue.
According to Google, "Can you change your Gmail address?" was the top-searched "can you" Gmail-related question over the past year in the US.
Now that that particular wish is being granted, patience is in order.
According to the company’s support page, the feature is rolling out gradually across the US, so users may not have immediate access to it.
Here’s how to make the change:
- Go to myaccount.google.com/google-account-email.
- Sign in if prompted.
- Click "Personal info."
- Click "Email," then "Google Account email."
- Under "Google Account email," click "Change Google Account email." If you don't see this option, the feature may not be available to you yet.
- Enter a new username. You'll need to choose a name that isn't already in use and hasn't previously been used and deleted.
- Click "Change email," then "Yes, change email."
- Follow the steps on the screen.
- When complete, you'll have a brand spanking new Google Account email, and your old address will show as an alternate.
Users can change their username only once every 12 months, and they won’t be able to delete their new email address during that period.
Google shared that old emails will be preserved and that users will be able to sign in to Google services using both the old and new addresses.
In the wake of Google’s announcement, many users are rejoicing at the opportunity to rebrand themselves.
META
Quote:Parents are desperate for help to protect their children from harmful social media platforms following two bombshell court rulings last week that hit the tech giant Meta with penalties in the millions.
“Ninety-five percent of our kids are using these products that we know are harmful,” Julie Frumin, a 43-year-old mother of two from Westlake Village, north of Los Angeles, fumed to The Post. “We need help. Help us!”
But others finally see more than a glimmer of hope in the wake of the cases.
Deb Schmill, founding member of ParentsSOS, helped craft legislation for phone-free schools in Massachusetts. Her daughter, Becca Mann Schmill, was 18 when she died of fentanyl poisoning from drugs she purchased through a social media platform.
Schmill told The Post that the court victories are a “watershed moment,” proclaiming they “are a major first step toward ending one of the most shameful public health failures in modern American history.”
On Tuesday, a jury in New Mexico ruled that Meta, which owns Instagram, Facebook and WhatsApp, prioritized profits over safety, misled users and failed to protect children from sexual predators. The jury ordered Meta to pay $375 million in civil penalties to 37,500 users, the maximum penalty allowed in the state.
The tech company denies any wrongdoing and plans to appeal the verdict.
The next day, a jury in Los Angeles sided with a 20-year-old woman, known only by her first name Kaley, who had accused Instagram and Google’s YouTube of making her addicted to their apps through features like scrolling and autoplay. Meta is now liable for $4.2 million in damages, and Google for $1.8 million.
Both Meta and YouTube insist that their platforms are safe for kids — but tech companies are facing more lawsuits all over the country.
ROBOTAXI
Quote:It was a real robo flop.
A glitch caused a fleet of Chinese robo-taxis to come to an unscheduled stop in Wuhan, China, leaving passengers trapped and stranded in traffic.
A preliminary investigation by Wuhan police revealed that over 100 of the AI-powered Apollo Go robo-taxis, operated by Chinese tech giant Baidu, ground to a screeching halt on a busy highway Tuesday night following a “system malfunction,” ABC News reported. At least one collision also occurred, according to CNBC.
One customer told local media that their automated cab stalled while rounding a corner, a breakdown announced on the robo-ride's screen.
“Driving system malfunction,” it read. “Staff are expected to arrive in 5 minutes.”
Unfortunately, when help didn’t come, the stranded passenger was compelled to open the door and vacate the vehicle.
While some riders exited on their own, others were wary of leaving, as they were in the middle of an active ring road (an overpass without traffic lights, designed to keep traffic flowing) with cars moving at high speeds on either side of them.
Thankfully, no injuries were reported during the freak incident, which saw multiple people rescued, local media reported.
This marked the first time robo-taxis have shut down en masse in the Middle Kingdom.
Baidu hasn't released an official statement identifying the cause of the outage, although the techsperts at The Tech Buzz theorized it could have been due to a variety of factors, ranging from cloud connectivity issues to software bugs. As the robo-taxis require constant communication with servers for navigation, route optimization and other functions, one small hiccup can crash the entire system, causing the vehicles to freeze in place.
FOR CALIFORNIAN EV-DRIVERS ONLY
Quote:The state of California could pick up the tab for personal car repairs as part of an under-the-radar program — but only if you own an EV.
The California Air Resources Board has quietly launched a $10 million program that pays for battery repairs up to $7,500 for electric vehicles — or, if the battery can’t be salvaged, up to $10,000 if you choose to purchase or lease a new car.
Called the Zero-Emission Assurance Project, the plan launched statewide on March 30.
Anyone who has “purchased and continuously owned a used zero-emission vehicle” through two state funding assistance programs — CARB Financing Assistance or Clean Cars 4 All — is eligible, according to a state website.
The program covers up to $7,500 in repairs for failed battery or fuel cell components not covered by a warranty. If the battery can’t be salvaged, the state will subsidize the purchase of a new EV at up to $10,000.
“ZAP is available to anyone who purchased a used [vehicle] through one of CARB’s vehicle purchase incentive programs … who suspect that their vehicle’s critical battery or fuel cell components are in need of major repair,” explained Lindsay Buckley, spokesperson for the California Air Resources Board.
The freebies are the result of Assembly Bill 193, authored in 2018 by then-Assemblymember and current state Sen. Sabrina Cervantes of Riverside, and signed by then-Gov. Jerry Brown.
The $10 million program was funded through Gov. Gavin Newsom’s 2022 budget — a trailer bill tucked inside a massive $2.5 billion spending plan to boost zero-emission vehicles.
The battery repair incentive launched in select counties before rolling out statewide last month. To date, no EVs have been repaired through the program, though “a handful of vehicles have undergone initial inspections,” Buckley said.
CHINESE DIGITAL HUMANS
Quote:China's cyberspace regulator issued draft regulations on Friday to oversee the online development of digital humans, requiring clear labeling and banning services that could mislead children or fuel addiction.
The Cyberspace Administration of China’s proposed rules would require prominent “digital human” labels on all virtual human content and prohibit digital humans from providing “virtual intimate relationships” to those under 18, according to rules published for public comment until May 6.
The draft regulations would also ban the use of other people’s personal information to create digital humans without consent, or using virtual humans to bypass identity verification systems, reflecting Beijing’s efforts to maintain control in the face of advances in artificial intelligence.
Digital humans are also prohibited from disseminating content that endangers national security, incites subversion of state power, promotes secession or undermines national unity, the draft rules said.
Service providers are advised to prevent and resist content that is sexually suggestive, depicts horror, cruelty or incites discrimination based on ethnicity or region, according to the document. Providers are also encouraged to take necessary measures to intervene and provide professional assistance when users exhibit suicidal or self-harming tendencies.
China made clear its ambitions to aggressively adopt AI throughout its economy in the new five-year policy blueprint issued last month. The push comes alongside tightening governance in the booming industry to ensure safety and alignment with the country’s socialist values.
The new rules aim to fill a gap in governance in the digital human sector, setting clear red lines for the healthy development of the industry, according to an analysis published on the cyberspace regulator’s website.
“The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy,” it added.
ORACLE
Quote:As thousands of Oracle employees awoke on Tuesday to an email informing them they were being laid off, the workers likely didn’t know the tech company had been busy trying to hire foreign staff.
According to U.S. Citizenship and Immigration Services data, Oracle filed some 3,126 petitions to employ H-1B workers in fiscal years 2025 and 2026.
Employers must submit the paperwork when seeking to hire foreign workers in specialty occupations like technology.
Some 436 of those petitions were filed this year alone.
Amazon, which in January said it would axe 16,000 corporate employees, has filed some 2,675 H-1B petitions during the same two-year fiscal period.
That came on top of news in October that the retail giant was axing 14,000 corporate workers.
News of Oracle’s attempts to bring in foreign workers sparked outrage among some on social media.
One user on the app Blind, an anonymous forum for verified employees, called the H-1B petitions a “slap in our face.”
“If this doesn’t make you angry, maybe you need to read some heartfelt posts on LinkedIn from Oracle employees who are US citizens and have been laid off after working at Oracle for years,” the user wrote.
Another commenter posted on the site: “Look at all big tech companies, they do massive layoffs then rehire at lower salary.”
A third added: “Transnational corporations are disloyal to the American state and the nation.”
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
![[Image: SP1-Scripter.png]](https://www.save-point.org/images/userbars/SP1-Scripter.png)
![[Image: SP1-Writer.png]](https://www.save-point.org/images/userbars/SP1-Writer.png)
![[Image: SP1-Poet.png]](https://www.save-point.org/images/userbars/SP1-Poet.png)
![[Image: SP1-PixelArtist.png]](https://www.save-point.org/images/userbars/SP1-PixelArtist.png)
![[Image: SP1-Reporter.png]](https://i.postimg.cc/GmxWbHyL/SP1-Reporter.png)
My Original Stories (available in English and Spanish)
List of Compiled Binary Executables I have published...
HiddenChest & Roole
Give me a free copy of your completed game if you include at least 3 of my scripts!
Just some scripts I've already published on the board...
KyoGemBoost XP VX & ACE, RandomEnkounters XP, KSkillShop XP, Kolloseum States XP, KEvents XP, KScenario XP & Gosu, KyoPrizeShop XP Mangostan, Kuests XP, KyoDiscounts XP VX, ACE & MV, KChest XP VX & ACE 2016, KTelePort XP, KSkillMax XP & VX & ACE, Gem Roulette XP VX & VX Ace, KRespawnPoint XP, VX & VX Ace, GiveAway XP VX & ACE, Klearance XP VX & ACE, KUnits XP VX, ACE & Gosu 2017, KLevel XP, KRumors XP & ACE, KMonsterPals XP VX & ACE, KStatsRefill XP VX & ACE, KLotto XP VX & ACE, KItemDesc XP & VX, KPocket XP & VX, OpenChest XP VX & ACE