JAGUAR LIED
Last week, Jaguar let everybody think that nothing serious had happened to its retail and production operations, but the truth is that cyber-crooks stole data.

Quote:Jaguar Land Rover (JLR) confirms some data was stolen in last month's cyberattack – all while factory workers are told to stay home for another week as the company struggles to restore operations.
With the restoration timeline still an unknown, the UK-based luxury automaker on Wednesday put out a fresh statement about the August 31st cyberattack to update a hungry supply chain and the public.
The attack, which forced JLR to “proactively shut down” its systems for nearly two weeks now, has incapacitated the high-end auto manufacturer’s retail arm, as well as operations at multiple production facilities.
“Since we became aware of the cyber incident, we have been working around the clock to restart our global applications in a controlled and safe manner,” JLR posted on its corporate website.
Now it's been reported that JLR staff, who were to report back to work on Wednesday, have been told to stay home again, with disruptions moving into a third week.
Production was paused last week at factories in the English towns of Halewood and Solihull, as well as at the company’s engine manufacturing site in Wolverhampton, the Independent reported.
Apparently, the tens of thousands of furloughed staff were told to be on standby in case circumstances change, the outlet said.
JLR has apologized for “the continued disruption” and said it will “continue to update as the investigation progresses.”
Data confirmed stolen
Besides the operational woes, the company has now admitted that the hackers responsible for the breach made off with some of its data, a reversal of statements made in the days right after the attack that no data had been accessed.
“As a result of our ongoing investigation, we now believe that some data has been affected,” Jaguar Land Rover said in the September 10th statement.
JLR did not say what type of data was accessed, how much may have been exfiltrated, or whether any of its over 30,000 employees may be affected. However, JLR reiterated that its forensic investigation “continues at pace,” promising to “contact anyone as appropriate if we find that their data has been impacted.”
CYBER-BREACH & LEAKS
Quote:Fairmont Federal Credit Union (FFCU) has informed hundreds of thousands of people about a devastating breach that exposed everything from names to PINs and healthcare data. The kicker? Attackers obtained the data nearly two years ago.
The credit union informed customers of a data breach that the FFCU discovered in late January 2024. Information that the FFCU submitted to the Maine Attorney General’s Office revealed that the 2023 data breach exposed the data of over 187,000 individuals.
After launching an investigation, the company learned that attackers had breached its systems months earlier, roaming FFCU’s network from September 30th through October 18th, 2023.
“As part of the investigation, FFCU engaged external cybersecurity professionals who regularly investigate and analyze these types of situations to help determine the extent of any compromise of the information on the FFCU network and conducted a manual review,” FFCU’s data breach notice said.
The investigators did not appear to hurry with their conclusions: according to FFCU, the company didn’t determine what type of data was stolen until August 2025, nearly two years after the breach. What makes matters worse is the enormous extent of the attackers’ access to personal details.
According to the FFCU’s data breach notice, the exposed details include: full names, dates of birth, addresses, Social Security numbers, US Alien registration numbers, passport numbers, driver’s license or state ID numbers, military ID numbers, tax ID numbers, non-US national ID numbers, financial account numbers, routing numbers, financial institution names, credit card/debit card numbers, security code/PIN numbers, credit card/debit card expiration dates, IRS PIN numbers, treatment information/diagnosis, prescription information, provider names, MRN/patient IDs, Medicare/Medicaid numbers, health insurance policy/subscriber numbers, treatment cost information, full access credentials, security questions and answers, and digital signatures.
FFCU noted that not all data elements were impacted for every individual, meaning that the extent varies from person to person. However, the gigantic list of exposed data suggests that attackers had extensive access to files containing critical customer information.
The exposed information enables attackers to carry out numerous malicious activities, ranging from complete medical identity theft to targeted phishing attacks and financial fraud. Not only could attackers remotely verify victims’ identities, but they could also use payment card details for illicit purchases and health insurance details to obtain prescription drugs.
What’s worse, unlike a payment card or an ID, medical details cannot be replaced, which means victims will face an elevated risk of medical identity theft for the rest of their lives.
The FFCU noted that the company is not aware of any incidents of identity theft or financial fraud related to the attack and said it will provide victims with complimentary identity theft prevention services.
While FFCU doesn’t specify what type of cyberattack it had to deal with, the dark web monitoring service Ransomware Live indicates that the now-defunct ransomware cartel Black Basta targeted the company. The estimated attack date, October 18th, 2023, coincides with the date provided in FFCU’s data breach notice.
Quote:A server belonging to one of the big names in generative AI just spilled sensitive user data, including private prompts and authentication tokens, potentially exposing millions of people.
Cybernews researchers discovered an unprotected Elasticsearch instance linked to Vyro AI, the company behind some of the most downloaded generative AI tools on Android and iOS.
The open server was leaking 116GB of user logs in real time from the company’s three apps: ImagineArt (10M+ downloads on Google Play), Chatly (100K+ downloads), and Chatbotx, a web-based chatbot with around 50K monthly visits.
The Pakistan-based company claims to have more than 150 million app downloads across its portfolio and says its products pump out 3.5 million images every week.
The leak covered both production and development environments and stored about 2–7 days' worth of logs. Researchers say the database was first indexed by IoT search engines in mid-February, meaning it could have been visible to attackers for months.
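Exposed instances like this are findable because an Elasticsearch node running with security disabled answers an unauthenticated HTTP GET on its root endpoint with a JSON cluster banner, which is exactly what IoT search engines index. A minimal, purely illustrative detection heuristic in Python (the sample response and cluster name below are hypothetical, not Vyro AI's actual setup):

```python
import json

def looks_like_open_elasticsearch(body: str) -> bool:
    """Heuristic: does this HTTP response body look like the root
    endpoint of an Elasticsearch node answering without auth?"""
    try:
        data = json.loads(body)
    except (ValueError, TypeError):
        return False
    # An unauthenticated node returns its cluster banner, including
    # the distinctive tagline, instead of a 401 security error.
    return (
        isinstance(data, dict)
        and data.get("tagline") == "You Know, for Search"
        and "cluster_name" in data
    )

# Hypothetical banner, as an exposed node might return it.
sample = json.dumps({
    "name": "node-1",
    "cluster_name": "app-logs",  # illustrative cluster name
    "version": {"number": "7.17.0"},
    "tagline": "You Know, for Search",
})

print(looks_like_open_elasticsearch(sample))                       # True
print(looks_like_open_elasticsearch('{"error": "unauthorized"}'))  # False
```

A secured node would instead return an authentication error for anonymous requests, which is why this banner check is the telltale sign defenders look for when auditing their own perimeter.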
What data did Vyro AI leak?
“This leak is significant as it would have allowed for monitoring user behaviour, extracting sensitive information that users shared with AI models, and would have allowed for the hijacking of user accounts,” Cybernews researchers explained.
The size of ImagineArt alone makes the incident alarming. With more than 10M Android installs and claims of 30M+ active users overall, the exposed tokens are a treasure trove for account hijackers. Attackers could easily exploit leaked data to lock users out of their accounts and take them over.
“Takeovers may result in access to full chat history, access to generated images, or could be abused to illegitimately purchase AI tokens, which could later be used for malicious purposes,” added the research team.
Leaked prompts are also troublesome. Conversations with AI often contain intimate or private information, so exposed prompts could reveal things people would never post publicly.
AI security is still not the first priority
The leak underscores the growing security gap in the booming AI sector. As AI startups rush to grab market share, they sometimes cut corners on security. But as more people feed their thoughts, ideas, and even confidential data into generative AI systems, the stakes keep rising.
In August, users were shocked when their conversations with ChatGPT and Grok were leaked on Google search. The leaks were caused by an insecure feature that allowed users to share conversations. When they created share links, the content became crawlable by search engines. OpenAI has since removed the feature.
Recently, Cybernews research revealed that an AI chatbot launched by travel giant Expedia could, with the right prompts, show users how to make Molotov cocktails. The situation exposed how customer-support chatbots released without the appropriate guardrails can expose companies to legal, financial, and reputational risk.
While AI chatbots are often pre-programmed to avoid sensitive or harmful topics, the lack of reliable safety measures may result in AI models going rogue.
Even AI giants like OpenAI struggle with effective guardrails: after the company launched its latest model, GPT-5, several security teams jailbroke the chatbot in less than 24 hours.
Disclosure timeline
Leak discovered: April 22nd, 2025
Initial disclosure: July 22nd, 2025
CERT contacted: July 28th, 2025
AI
Quote:Google’s AI-powered search seems to be confused about whether the so-called Department of Government Efficiency (DOGE) ever existed – or what it is exactly.
The neat little summaries provided by AI Overview are the first – and increasingly the last – thing that many of us read when doing a Google search these days. Unfortunately, they’re sometimes just plain wrong, and we may only realize it when the misinformation is too blatant to miss.
The existence of the notorious White House commission, originally run by President Donald Trump’s former adviser, billionaire Elon Musk, is a case in point. When users asked some DOGE-related questions, Google AI appeared to deny the entity's existence.
Bluesky user iucoinu shared screenshots of their interaction with Google search, where an AI-provided summary described DOGE as a “fictional entity” and a “conspiracy theory.” It said the term was simply a political satire and used to criticize Trump’s policies.
“[W]as idly looking up how many people DOGE is projected to kill and Google’s AI informed me that it was all in my head and that DOGE isn’t real,” iucoinu said in their post on social media.
Screenshots show iucoinu searching for information on “doge deaths,” which prompted an AI Overview summary denying the entity's existence. While AI Overview is right in saying DOGE is “no real department,” it does exist, even if its status is unclear.
Cybernews has searched for the same term as the Bluesky user, but was not provided with an AI summary.
However, when we googled “department of government efficiency deaths,” the search bot told us that DOGE was a “proposed” entity and went on to say that it had been linked to “significant cuts and disruptions” in US foreign aid and government services.
When we searched for “does department of government efficiency exist,” we were given a response saying, “No, there isn't an officially established federal department called the ‘Department of Government Efficiency (DOGE),’ as such a department requires an act of Congress.”
It further clarified that “DOGE was established by an executive order as an advisory body under President Trump to provide recommendations on modernizing federal IT and streamlining bureaucracy.”
Google acknowledges that its AI summaries may be misleading and all come with a disclaimer in fine print that they "may include mistakes."
DOGE was established in January of this year by Trump’s executive order. It was headed by Musk until the X owner fell out of favor with the president and left the administration in May.
Quote:Grok has once again been caught spreading blatant misinformation on X. In several bizarre exchanges, the chatbot repeatedly claimed that Charlie Kirk was "fine" and that gruesome videos of his assassination were a "meme edit."
In one exchange shortly after videos of the shooting began to spread on X, one user tagged Grok and asked if Kirk could have survived the shooting. Grok's response was nonsensical. "Charlie Kirk takes the roast in stride with a laugh— he's faced tougher crowds," it wrote. "Yes, he survives this one easily."
When another user replied with "wtf are you talking about" and pointed out that Kirk had been shot in the neck, Grok insisted it was "a meme video with edited effects to look like a dramatic 'shot'—not a real event." It doubled down when pressed again by another incredulous user. "The video is a meme edit—Charlie Kirk is debating, and effects make it look like he's 'shot' mid-sentence for comedic effect," Grok wrote. "No actual harm; he's fine and active as ever."
Grok went on to make similar claims in several other exchanges on Wednesday, saying the video was "exaggerated for laughs" and contained "edited effects for humor." In another, Grok noted that multiple news outlets and President Donald Trump had confirmed Kirk's death but described it as a "meme" that appeared to be "satirical commentary on reactions to political violence." By Thursday morning, Grok seemed to understand that Kirk had indeed been shot and killed, but still referenced a "meme video" it said was "unrelated."
That's not the only misinformation Grok spread in the immediate aftermath of the shooting, though. As The New York Times reports, Grok also repeated the name of a Canadian man who was erroneously identified as the shooter by users on X.
Representatives for X and xAI didn't immediately respond to a request for comment.
The xAI chatbot, which has been trained on X posts among other sources, has become ubiquitous on X as users frequently tag Grok in posts in an attempt to fact check or simply dunk on other users. But the chatbot has proved to be extremely unreliable at best. Previously, Grok was also caught spreading misinformation about the 2024 presidential election, falsely claiming that then Vice President Kamala Harris couldn't appear on the ballot.
Quote:Elon Musk’s xAI has laid off at least 500 workers from the data annotation team that is tasked with training Grok AI, according to a recent report by Business Insider.
The company notified its workers of the planned overhaul of its team of generalist AI tutors in emails seen by Business Insider.
"After a thorough review of our Human Data efforts, we've decided to accelerate the expansion and prioritization of our specialist AI tutors, while scaling back our focus on general AI tutor roles. This strategic pivot will take effect immediately," the email read. "As part of this shift in focus, we no longer need most generalist AI tutor positions and your employment with xAI will conclude."
Employees were told that as of the day of the layoff notice, they would no longer have access to company systems, but they will be paid through either the end of their contract or November 30th.
The data annotation team is xAI’s largest and is responsible for training the company's chatbot Grok through contextualizing and categorizing raw data.
According to Business Insider, the number of members in the company’s Slack channel for data annotators shrank from 1,500 to a little over 1,000 over the course of Friday, and that number is continuing to go down.
Employees had already been told on Thursday to prepare for a reorganization of the data annotation team. Additionally, several workers were asked to undergo a series of tests to determine their roles within the company going forward. According to an internal screenshot cited by Business Insider, these tests would be used to sort annotators and their supervisors based on their strengths and interests.
Following the news, an xAI spokesperson pointed Business Insider to a company post on X saying that it’s planning to “immediately surge [its] Specialist AI tutor team by 10x”.
The layoffs come days after the Slack accounts of several of xAI’s senior-level employees, including the team’s former head, were deactivated. After that, many workers were pulled into one-on-one meetings to review their responsibilities and achievements.
Quote:Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission.
Anthropic and the plaintiffs in a court filing asked US District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount.
“If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment,” the plaintiffs said in the filing.
The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems.
Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company’s AI models.
In a statement, Anthropic said the company is “committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.” The agreement does not include an admission of liability.
Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts.
The writers’ allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training.
Quote:Britannica Group, the company behind the 250-year-old Encyclopedia Britannica and Merriam-Webster, has filed a lawsuit against the AI web search startup Perplexity AI, accusing it of copyright infringement.
On September 10th, the group filed a lawsuit in New York federal court, saying that Perplexity’s answer engine systematically scrapes its websites, unlawfully copies articles, and drives traffic away.
The filing also accuses Perplexity of trademark infringement, arguing that it has linked the Britannica and Merriam-Webster names to its inaccurate AI-generated results.
“AI-created content confuses and deceives Perplexity users into believing (falsely) that the hallucinations are associated with, sponsored by, or approved by Britannica,” the filing said.
The lawsuit seeks unspecified damages, as well as requesting that Perplexity cease to misuse the content.
"When I read today’s news on Encyclopedia Britannica’s lawsuit against Perplexity, I couldn’t help but think of the challenges faced by every digital publisher in the age of AI crawlers. Publishers have always invested heavily in creating valuable, proprietary content,” Aurelie Guerrieri, Chief Marketing & Partnership Officer at DataDome, shared with Cybernews.
“Now, as AI-generated traffic surges (quadrupling in 2025 alone across DataDome’s customer base), protecting that investment has never been more challenging.”
The case is part of a string of legal challenges that Perplexity has been facing. Earlier in June, the BBC threatened legal action against the company for using BBC content to train its "default AI model". Several other organizations, including Forbes and Wired, accused Perplexity of plagiarism.
In October, News Corp, parent company of The Wall Street Journal and the New York Post, sued Perplexity for copyright infringement. Around the same time, The New York Times also sent Perplexity a “cease and desist” notice.
"Publishers need solutions that finely understand AI traffic, analyzing behavioral context, endpoint sensitivity, and traffic patterns so they can route it effectively and monetize their digital assets," added Guerrieri.
Recently, a report by Cloudflare discovered that Perplexity’s bots ignore – or sometimes don’t even fetch – robots.txt files. These files instruct web crawlers what they can or can’t access.
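A well-behaved crawler is expected to fetch a site's robots.txt and honor its rules before requesting any pages. Python's standard library ships a parser for exactly this, so the check a compliant bot performs can be sketched in a few lines (the policy text and bot name below are illustrative, not Perplexity's):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt policy directly. In a real crawler this text
# would be fetched from https://example.com/robots.txt first.
rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /private/

User-agent: ExampleBot
Disallow: /
""".splitlines())

# A generic crawler may fetch public pages but not /private/.
print(rp.can_fetch("*", "https://example.com/articles/1"))  # True
print(rp.can_fetch("*", "https://example.com/private/x"))   # False

# A crawler identifying itself as ExampleBot is barred entirely.
print(rp.can_fetch("ExampleBot", "https://example.com/articles/1"))  # False
```

The key point is that robots.txt is purely advisory: nothing in the protocol stops a crawler from skipping the fetch or ignoring the answer, which is what the Cloudflare report accuses Perplexity's bots of doing.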
Quote:The owner of Rolling Stone, The Hollywood Reporter, and Variety has sued Google, alleging that its AI summaries use its reporting illegally and reduce traffic to its sites.
The lawsuit was filed by Penske Media, an American publishing conglomerate with over 120 million online visitors a month, in federal court in Washington, D.C., in what is the first instance of a major US publisher taking Google to court over its AI summaries.
News organizations have long accused Google’s AI Overviews of stealing traffic from their sites. Earlier, online education company Chegg and a small Arkansas newspaper, the Helena World Chronicle, both filed lawsuits against the tech giant.
Chegg claimed that Google’s AI summaries feature is reducing the company’s ability to compete and is eliminating demand for original content.
Companies, including Penske Media, argue that despite AI summaries offering links to the original source, readers often don’t feel the need to follow them. Additionally, Penske Media says that about 20% of Google searches that link to one of its sites now show AI Overviews, with the percentage continuously increasing.
Penske Media has also attributed a sharp drop in affiliate revenue — more than a third from its peak by the end of 2024 — to decreased traffic from Google. The complaint alleges that discouraging user traffic in such a way “will have profoundly harmful effects on the overall quality and quantity of the information accessible on the internet.”
The company added that it faces the choice of either blocking Google from listing its sites in its search results, which would be devastating to the business, or fuelling its AI summaries.
The lawsuit seeks unspecified monetary damages, as well as a permanent injunction against Google.
In response to the lawsuit, Google said that AI Overviews offers a better user experience and sends traffic to a wider variety of sites.
“With AI Overviews, people find search more helpful and use it more, creating new opportunities for content to be discovered,” Google spokesman José Castañeda said, according to The Wall Street Journal. “Every day, Google sends billions of clicks to sites across the web, and AI Overviews send traffic to a greater diversity of sites. We will defend against these meritless claims.”
Earlier in September, Google was allowed to keep its Chrome browser in a rare win for Big Tech in its battle with US antitrust enforcers. The decision, however, was not welcomed by many publishers, who have no way to opt out of AI Overviews.
On September 10th, Britannica Group, the company behind the 250-year-old Encyclopedia Britannica and Merriam-Webster, filed a lawsuit against Perplexity, saying its answer engine systematically scrapes its websites, unlawfully copies articles, and drives traffic away.
Quote:Microsoft and OpenAI said on Thursday they have signed a non-binding deal on new relationship terms that would allow OpenAI to proceed with restructuring itself into a for-profit company, marking a new phase in one of the industry's most high-profile partnerships, formed to fund the ChatGPT frenzy.
Details on the new commercial arrangements were not disclosed, but the companies said they were working to finalize terms of a definitive agreement. This marks a step forward in OpenAI's prolonged talks with Microsoft as the former seeks to raise capital under a more common governance structure and eventually go public to fund artificial intelligence development.
Microsoft invested $1 billion in OpenAI in 2019 and another $10 billion at the beginning of 2023. Under their previous agreement, Microsoft had exclusive rights to sell OpenAI's software tools through its Azure cloud computing platform and had preferred access to the startup's technology.
Microsoft was once designated as OpenAI's sole compute provider, though it lessened its grip this year to allow OpenAI to pursue its own data center project, Stargate, including signing $300 billion worth of long-term contracts with Oracle, as well as another cloud deal with Google.
As OpenAI's revenue grows into the billions, it is seeking a more conventional corporate structure and partnerships with additional cloud providers to expand sales and secure the computing capacity needed to meet demand.
Microsoft, meanwhile, wants continued access to OpenAI's technology even if OpenAI declares its models have reached humanlike intelligence, a milestone that would end the current partnership under existing terms.
OpenAI said under current terms, its nonprofit arm will receive more than $100 billion — about 20% of the $500 billion valuation it is seeking in private markets — making it one of the most well-funded nonprofits, according to a memo from Bret Taylor, chairman of OpenAI's current nonprofit board.
The companies did not disclose how much of OpenAI Microsoft will own, nor whether Microsoft will retain exclusive access to OpenAI’s latest models and technology.
Regulatory hurdles remain for OpenAI, as attorneys general in California and Delaware need to approve OpenAI's new structure. The company hopes to complete the conversion by year's end, or risk losing billions in funding tied to that timeline.
Microsoft and OpenAI compete on products ranging from consumer chatbots to AI tools for businesses. Microsoft has also been working on developing its own AI models to reduce its dependence on OpenAI's technologies.
Quote:Microsoft is preparing to overhaul its Office 365 suite by adding artificial intelligence models from Anthropic — the latest sign of a growing rift with OpenAI, according to a report.
The software giant will pay to use Anthropic’s Claude models for some Office 365 Copilot features, according to two people involved in the project cited by The Information.
The decision follows internal testing that found Anthropic’s models outperformed OpenAI’s in generating PowerPoint decks and handling complex Excel functions.
The change blends Anthropic and OpenAI technology inside Microsoft’s most important business software, which serves more than 430 million paying users worldwide, according to The Information.
OpenAI’s flagship GPT-5 remains in use for several Copilot tasks, but Anthropic’s Claude Sonnet 4 will take over advanced work like spreadsheet automation and presentation design.
The move comes as Microsoft and OpenAI continue tense negotiations over OpenAI’s restructuring into a for-profit company ahead of an eventual public listing, The Information reported.
OpenAI’s plan to restructure into a public benefit corporation has triggered tough negotiations with Microsoft, its largest backer, over how much equity Microsoft will hold and what level of privileged access it will retain to future AI models.
The drawn-out talks, along with regulatory scrutiny in California, have delayed OpenAI’s IPO until at least 2026, complicating liquidity plans for investors and employees.
Investors including SoftBank are pressing for resolution, while OpenAI’s nonprofit parent insists it will maintain control to ensure its mission of developing AI for the benefit of humanity remains central.
OpenAI declined to comment when reached by The Information. Anthropic and Amazon Web Services, which hosts Anthropic’s models, also did not respond to The Information’s requests for comment.
A Microsoft spokesperson told The Information: “OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership.”
Unlike its arrangement with OpenAI, where Microsoft’s deep financial investment grants free use of the startup’s models, Microsoft will pay AWS to access Anthropic’s technology.
Quote:Amid a rash of suicides, the company behind ChatGPT could start alerting police over youth users pondering taking their own lives, the firm’s CEO and co-founder, Sam Altman, announced. The 40-year-old OpenAI boss dropped the bombshell during a recent interview with conservative talk show host Tucker Carlson.
It’s “very reasonable for us to say in cases of young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities,” the techtrepreneur explained. “Now that would be a change because user privacy is really important.”
The change reportedly comes after Altman and OpenAI were sued by the family of Adam Raine, a 16-year-old California boy who committed suicide in April after allegedly being coached by the large language model. The teen’s family alleged that he was provided a “step-by-step playbook” on how to kill himself — including tying a noose to hang himself and composing a suicide note — before he took his own life.
Following his untimely death, the San Francisco AI firm announced in a blog post that it would install new safety features that would allow parents to link their accounts to their teens’, deactivate functions like chat history, and receive alerts should the model detect “a moment of acute distress.”
It’s yet unclear which authorities would be alerted — or what information would be provided to them — under Altman’s proposed policy. However, his announcement marks a departure from ChatGPT’s prior MO for dealing with suicidal ideation, which involved urging users to “call the suicide hotline,” the Guardian reported.
Under the new guardrails, the OpenAI bigwig said that he would be clamping down on teens attempting to hack the system by prospecting for suicide tips under the guise of researching a fiction story or a medical paper.
Altman believes that ChatGPT could unfortunately be involved in more suicides than we’d like to believe, claiming that worldwide, “15,000 people a week commit suicide,” and that about “10% of the world are talking to ChatGPT.”
“That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it,” the techtrepreneur explained. “They probably talked about it. We probably didn’t save their lives.”
He added, “Maybe we could have said something better. Maybe we could have been more proactive.”
Quote:Therapists are turning to ChatGPT in secret, raising alarms over patient trust, privacy, and whether AI belongs in therapy at all.
I used to be a regular therapy-goer, and I would have been mortified if my shrink had been using AI during our Zoom meets.
Luckily, my therapist was all about riffing off the subject matter we’d both produced in the moment, though I’m sure some nifty specialists would still be able to shift to an LLM, sub rosa.
For me, my therapy sessions were built on imperfections, and no, I don’t mean me being damaged goods, but the ebb and flow of a curative rivulet.
As the MIT Technology Review recently explored, patient trust collapses when a therapist might be quietly relying on AI-contrived responses – they’d better be careful when sharing the screen!
Cybernews spoke to couples’ and individual therapist Thomas Westenholz to help gauge the ethics of it all.
Westenholz believes it should be a case of transparency on the therapist's part:
“Therapy is about honesty and openness. If a client discovers their therapist has been relying on ChatGPT behind the scenes without disclosure, it risks breaking that bond of safety.”
My therapist would keep me in the loop with her back-to-back schedule. In fact, I wouldn’t be surprised if she had somewhere between six and eight clients in a working day, judging from the range of slots she offered me.
If a counsellor is feeling burned out or the turnaround time for a particular problem is particularly time-sensitive, it’s not difficult to see why they would take shortcuts, especially when something is brought up in the moment.
However, solutions are often found together, with the client – and if some scripted notes were being read off to me, I’d feel let down.
Sure, AI could occasionally come in to prepare some guidance before a session, but like a teacher, it should remain mainly for admin tasks.
“AI may have a supportive role, helping therapists with admin, generating psychoeducational resources, or providing prompts for reflection. But it should never replace the human presence that therapy depends on,” shared Westenholz.
CYBER-WANTED
Quote:A massive bounty has been placed on the head of one “high-value” cybercriminal associated with the “LockerGoga” and “MegaCortex” ransomware gangs.
A “LockerGoga” and “MegaCortex” ransomware administrator has been added to the “EU Most Wanted List” following an indictment by the Department of Justice (DoJ).
The allegedly prolific cybercriminal, Ukrainian national Volodymyr Viktorovich Tymoshchuk, has been added to the list, which includes a bounty of $10 million for any information leading to his arrest.
The Department of Justice released a statement charging Tymoshchuk with various offenses, including fraud, intentional damage to protected computers, and other hacking-related charges.
Europol has him down for computer-related crime and participation in a criminal organization, alongside racketeering and extortion.
This is because Tymoshchuk, known by the monikers “deadforz,” “Boba,” “msfv,” and “farnetwork,” is an alleged administrator of the LockerGoga, MegaCortex, and Nefilim ransomware schemes that robbed more than 250 companies of billions.
The indictment alleges that Tymoshchuk used the ransomware variants to encrypt worldwide computer networks, including those in the US, France, Germany, the Netherlands, Norway, and Switzerland.
The attacks supposedly cost companies millions of dollars, as they were forced to cover remediation expenses, pay ransoms, and repair damage to computer systems.
Europol estimates that the total financial damage caused by the cybercrime group has reached upwards of $18 billion worldwide.
What’s novel about Tymoshchuk and his cybercrime organization’s tactics is that each ransomware file was customized to fit each individual victim.
Quote:The Scattered Spider ransomware group, and more than a dozen other hacker buddies, abruptly decide to close up shop – apparently, because the pressure from law enforcement agencies has become too hot to handle.
“Our objectives having been fulfilled, it is now time to say goodbye,” Scattered Spider wrote in a farewell letter addressed to the "World," penned by the ransomware gang’s apparent "leader and representative."
The announcement was posted on the gang’s recently created Telegram channel on Thursday, along with a link to the goodbye missive hosted on a webpage run by the notorious BreachForums. "End of an era, bye.... 💔," the group writes in its last Telegram post.
“We LAPSUS$, Trihash, Yurosh, yaxsh, WyTroZz, N3z0x, Nitroz, TOXIQUEROOT, Prosox, Pertinax, Kurosh, Clown, IntelBroker, Scattered Spider, Yukari, and among many others, have decided to go dark,” the letter states.
The now short-lived Telegram venture, created on August 30th, was thought to be a collaboration between three well-known threat actors: Scattered Spider, LAPSUS$, and Shiny Hunters.
The channel – “scattered LAPSUS$ hunters 4.0” – was started only one day before luxury automaker, Jaguar Land Rover, announced it had suffered a massive breach, forcing it to shut down operations, allegedly at the hands of the rebranded group.
The cybercriminal trio, which claims on Telegram to “only exist to destroy the FBI,” has been rampant with provocative posts since the JLR attack.
Many of the hundreds of posted messages have been filled with complete gibberish, foul language, or scribbled on, as portrayed in the group’s final post featuring what appears to be a hacked US government database.
The gang has been using the channel to taunt not only JLR, Salesforce, Marks & Spencer, and other victims, but also the FBI, Google’s Mandiant, and the UK National Crime Agency (NCA), while threatening more attacks on other critical targets.
“As you know, the last weeks have been hectic. Whilst we were diverting you, the FBI, Mandiant, and a few others by paralyzing Jaguar factories, (superficially) hacking Google 4 times, blowing up Salesforce and CrowdStrike defences, the final parts of our contingency plans were being activated,” the group writes.
Pressure from authorities triggers shutdown
Scattered Spider made waves this spring by hitting British retail giants Marks & Spencer, Harrods, and Co-op, and has recently been connected, along with Shiny Hunters, to the recent Salesloft Drift/Salesforce hacking campaign, which hit more than 700 companies worldwide this summer.
Four members of the Shiny Hunters gang were arrested in June by French authorities, a fact Scattered Spider repeatedly brings up on Telegram calling Shiny its "BFF foreber [sic]."
BIG TECH
Quote:Facebook parent Meta Platforms put profit from its virtual-reality platform over safety, two former researchers told a Senate panel on Tuesday.
Former Meta user experience researcher Cayce Savage said the company shut down internal research showing Meta knew children were using its VR products and being exposed to sexually explicit material.
“Meta cannot be trusted to tell the truth about the safety or use of its products,” Savage said at the hearing before the Senate subcommittee on privacy and technology.
Meta has come under fire from members of Congress in recent weeks, after Reuters exclusively reported on an internal policy document that permitted the company’s chatbots to “engage a child in conversations that are romantic or sensual.”
“Does it surprise you that they would allow their chatbot to engage in these conversations with children?” Senator Marsha Blackburn, a Tennessee Republican, asked former Meta Reality Labs researcher Jason Sattizahn, who also testified at the hearing on Tuesday.
“No, not at all,” he said.
Meta has previously said the examples reported by Reuters were inconsistent with the company’s policies and had been removed.
Savage and Sattizahn are part of a group of current and former Meta employees whose whistleblower claims were first reported by the Washington Post on Monday.
Researchers were told not to investigate harms to children using its VR technology so that it could claim ignorance of the problem, Savage said. Savage encountered instances of children being bullied, sexually assaulted and asked for nude photographs in the course of her work, she said.
Meta spokesperson Andy Stone said in a statement that the claims are “based on selectively leaked internal documents that were picked specifically to craft a false narrative,” and that “there was never any blanket prohibition on conducting research with young people.”
Quote:Google privately told a federal court that “the open web is already in rapid decline,” a sharp reversal from its public claims that search traffic is booming — and a stunning admission as the Justice Department pushes to dismantle its ad tech empire.
The disclosure surfaced in a filing last week in the government’s ongoing antitrust case against Google, according to court documents flagged by Search Engine Roundtable and industry analyst Jason Kint.
For months, Google executives have insisted that the web is “thriving” and that the company’s AI-powered search tools are sending traffic to more publishers than ever.
A Google spokesperson told The Post on Tuesday that the filing has been amended to avoid confusion about what the company was referring to: “open-web display advertising,” not the open web as a whole.
“This is one cherry-picked line that misrepresents our legal filing – it’s clear from the preceding sentence that we’re referring to ‘open-web display advertising’ and not the open web as a whole,” a Google spokesperson told The Post.
“We are pointing out the obvious: that investments in non-open web display advertising like connected TV and retail media are growing at the expense of those in open web display advertising.”
Google argued that a forced divestiture of its advertising business would “only accelerate” the collapse of the open web and “harm publishers who currently rely on open-web display advertising revenue.”
The Justice Department is seeking remedies that could include breaking up Google’s advertising technology unit, which dominates the market for digital ads.
Prosecutors argue the company’s stranglehold on ad buying and selling has crushed competition and squeezed publishers.
Google, in its filing, said market forces — not regulators — are reshaping the industry, pointing to AI, connected TV and retail media as areas where advertisers are flocking.
“The fact is that today, the open web is already in rapid decline and Plaintiffs’ divestiture proposal would only accelerate that decline,” Google wrote.
“As the law makes clear, the last thing a court should do is intervene to reshape an industry that is already in the midst of being reshaped by market forces.”
The filing appeared to be at odds with recent reassurances from top Google brass.
In May, CEO Sundar Pichai told Decoder that the company is “definitely sending traffic to a wider range of sources and publishers” since rolling out AI search tools.
Nick Fox, Google’s senior vice president of knowledge, echoed that view on the “AI Inside” podcast, saying that “from our point of view, the web is thriving.”
Quote:Microsoft avoided a possible hefty EU antitrust fine by offering customers reduced prices for Office products excluding Teams, a move that comes amid rising tensions with the US over EU scrutiny of Big Tech.
The case was triggered by a 2020 complaint from Salesforce-owned Slack Technologies Inc to the European Commission, which accused Microsoft of bundling its chat and video app Teams with its Office product to gain an unfair advantage over rivals.
German rival alfaview filed a similar complaint in 2023.
Microsoft has agreed to widen the price gap by 50% between certain Microsoft 365 and Office 365 suites that exclude Teams and their equivalent versions that include Teams, including suites targeted at businesses, the EU competition enforcer said on Friday, confirming a Reuters story in May.
The price gap will range from 1 euro to 8 euros and remain in effect for seven years. The US software giant also committed to enhancing interoperability to facilitate competition for a period of 10 years. Microsoft's offer will be implemented globally.
European customers will also be able to export their Teams messaging data to rivals.
"Today's decision therefore opens up competition in this crucial market, and ensures that businesses can freely choose the communication and collaboration product that best suits their needs," EU antitrust chief Teresa Ribera said in a statement.
Ribera riled US President Trump last week after she slapped a €2.95 billion ($3.5 billion) fine on Alphabet's Google over its adtech practices, which he said was unfair and discriminatory, adding that the US may retaliate with more tariffs.
Nanna-Louise Linde, a Microsoft vice president for European government affairs, said in a statement: "We appreciate the dialogue with the Commission that led to this agreement, and we turn now to implementing these new obligations promptly and fully."
Alfaview Chief executive Niko Fostiropoulos said Microsoft's remedies would boost Europe's digital ambition.
"It sends an important signal for Europe's digital sovereignty: fair market conditions not only promote technological diversity, but also secure the long-term innovative strength of the European market," he said.
Microsoft, which has been fined a total of €2.2 billion over the years for bundling products together and other marketing tactics, has taken a more conciliatory approach with EU antitrust regulators in recent years.
EU antitrust fines can be as much as 10% of a company's global annual turnover.
Quote:Google and Amazon reportedly face a Federal Trade Commission probe over whether they are misleading companies that buy ads on their websites.
The FTC, led by Republican chairman Andrew Ferguson, is looking into whether the Big Tech giants have been transparent about the terms and pricing of their ad deals, Bloomberg reported, citing unnamed sources.
For Amazon, FTC officials want information on Amazon’s auction process and whether it informed clients about its “reserve pricing” for some ads – which refers to the minimum price that must be paid to buy ad space on the company’s website.
Meanwhile, Google is being probed about its internal ad pricing practices and whether it has quietly boosted the cost of ads without properly informing customers.
The FTC, Google, and Amazon declined to comment on the probes, which are reportedly still ongoing and being led by the FTC’s consumer protection unit.
The investigation marks another regulatory headache for both Google and Amazon, each of which face federal antitrust cases that are going to trial on Sept. 22.
The FTC is suing Amazon for allegedly enrolling customers in its Prime subscription service without their knowledge.
Elsewhere, a federal judge will consider remedies, including a potential Google breakup, after earlier finding that the search giant operates illegal monopolies in the digital advertising sector.
That case was brought by the Justice Department.
Google dodged a major crackdown earlier this month after US District Judge Amit Mehta rejected the DOJ’s recommendations that it be forced to sell off its Chrome web browser and be barred from paying billions of dollars to ensure its search engine is the default option on most smartphones.
Mehta instead decided that Google should share more data with rivals and be allowed to make payments to companies like Apple for default status, as long as the deals aren’t exclusive.
His ruling was universally panned by critics as a “slap on the wrist” and far too weak to open up competition.
CYBER-CONTROL
Quote:Germany’s Digital Committee is opposed to the current child sexual abuse material (CSAM) regulation, also known as the “chat control” proposal.
The European Commission wants to introduce a law requiring chat services to monitor all messages sent by their users.
In the past three years, numerous proposals have been discussed at the EU level, but an agreement still hasn’t been reached. This has to do with the fact that implementing client-side scanning requires weakening encryption. A majority of the EU Member States disagree and insist that encryption remains intact.
The current proposal calls for phones to be pre-installed with software to detect CSAM content before users send it. This would require scanning to take place on the phone itself.
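"Client-side scanning" in this context means each outgoing attachment would be checked against a database of known material on the device itself, before the message is encrypted and sent. As a purely illustrative sketch of that flow (real proposals involve perceptual-hash databases along the lines of PhotoDNA, not the plain SHA-256 blocklist assumed here; the digest below is a made-up placeholder):

```python
import hashlib

# Hypothetical on-device blocklist of known-bad content digests. A real
# system would ship a vetted database of perceptual hashes, which match
# near-duplicates; an exact cryptographic hash like SHA-256 only catches
# byte-identical files.
BLOCKLIST = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def scan_before_send(payload: bytes) -> bool:
    """Return True if the attachment may be sent, False if it matches the blocklist."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest not in BLOCKLIST

print(scan_before_send(b"holiday photo"))  # True: no match, message goes out
```

Even this toy version shows why critics object: the check has to happen on plaintext, before encryption, which is exactly the weakening of end-to-end guarantees the open letter warns about.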
Germany argues that the current proposal is too much of a violation of user privacy. The country is pushing for a “united compromise proposal” that would be acceptable to more countries.
“A uniform legal basis in the EU is urgently needed because the situation is worrisome. Private, confidential exchanges must continue to be private. At the same time, there is an obligation to counteract child abuse online. The aim of the black-red coalition is therefore to achieve a united stance between the departments,” a German representative of the Federal Ministry of the Interior stated in a press release.
In June, a coalition of international nonprofit organizations, including Amnesty International Germany and Chaos Computer Club (CCC), called on the German government to vote against European proposals that would introduce chat control.
“End-to-end encryption is an indispensable foundation for digital security. It protects the confidential communication of all people, companies and authorities and ensures the integrity of democratic institutions. Deliberately weakening encryption undermines trust in digital infrastructures and opens attack vectors to state and criminal actors,” the coalition wrote in an open letter addressed to the German government.
Last month, the Belgian coalition party N-VA opposed the current proposal’s idea of chat control.
“The goal is, of course, legitimate, but chat control threatens to become a monster that invades your privacy and that you can no longer tame,” N-VA Member of Parliament Michael Freilich told Belgian news outlet Het Laatste Nieuws.
SPOTIFY
Quote:Spotify has finally launched lossless audio — for now, available only to Spotify Premium listeners in select markets — to significantly improve its audio quality.
Rumors about the new feature have been circulating since 2017, with the company putting off the release several times over the years. Now, after all that speculation, it’s finally here.
“The wait is finally over; we’re so excited lossless sound is rolling out to Premium subscribers,” said Gustav Gyllenhammar, VP Subscriptions, Spotify. “We’ve taken time to build this feature in a way that prioritizes quality, ease of use, and clarity at every step, so you always know what’s happening under the hood. With Lossless, our premium users will now have an even better listening experience.”
Spotify has long been criticized for offering lower audio quality than direct competitors such as Apple Music and Amazon Music, but its offering should now be on par with those rivals’.
Lossless offers users an opportunity to stream tracks in up to 24-bit/44.1 kHz FLAC, “in the highest quality”. They will also be able to choose between Low, Normal, High, Very High, and now Lossless music quality for data usage optimization.
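Some rough arithmetic shows why Lossless is a meaningful step up. Uncompressed 24-bit/44.1 kHz stereo PCM runs at about 2.1 Mbps, versus the reportedly ~320 kbps of Spotify's existing "Very High" setting; FLAC then shrinks the lossless stream further without discarding any audio data (typical compression figures of 30-60% are a general FLAC rule of thumb, not a Spotify number):

```python
# Upper-bound data rate for Spotify's new Lossless tier: raw PCM before
# FLAC compression is applied.
bit_depth = 24        # bits per sample
sample_rate = 44_100  # samples per second (44.1 kHz)
channels = 2          # stereo

pcm_bps = bit_depth * sample_rate * channels
print(pcm_bps)              # 2116800 bits per second
print(pcm_bps / 1_000_000)  # ~2.12 Mbps, vs ~0.32 Mbps for "Very High"
```

So even after FLAC compression, lossless streaming plausibly uses several times the data of the old top setting, which is why the data-usage quality picker matters.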
Users will need to enable Lossless manually on each device, with the indicator appearing in the Now Playing view or bar and via the Connect Picker when the feature is on.
Earlier reports suggested that Spotify might hide its Lossless offering behind a higher subscription tier, but the feature will be rolling out to its Premium users on September 10th in 50 markets. Initially, they will include Australia, Austria, Czechia, Denmark, Germany, Japan, New Zealand, the Netherlands, Portugal, Sweden, the US, and the UK.
The feature arrives just weeks after Spotify launched direct messaging, which is supposed to make music sharing easier. However, users soon started noticing odd behavior, such as their identities being exposed across the internet whenever they shared a song.
The important question that many are asking is whether Lossless and direct messaging will one day become part of a new ‘Super Premium’ tier on Spotify. Currently, there are reports that Spotify is hoping to roll out its new Music Pro tier late this year, with prices for Music Pro expected to vary by geography, according to Bloomberg.
CYBER-ALERT
Quote:The Federal Communications Commission (FCC) has warned consumers about a new phishing scam targeting Amazon Prime Video and cable streaming service users. The scam promises to slash customers' bills in half.
The consumer watchdog says the scammers are bombarding the intended victims with fraudulent texts, robocalls, and voicemails offering 50% discounts on their monthly bills.
So far, the FCC has received complaints from Amazon Prime Video, Comcast Xfinity, and Spectrum TV and internet streaming services subscribers.
According to the consumer advisory, the bad actors send a pre-recorded message – a practice known as vishing – pretending to be from the company, urging the subscriber to call a given phone number to receive the discount.
In other cases, the scammers will send the subscriber a fake text message – a phishing technique known as smishing – using the same premise.
Caught on tape
In one 42-second call recorded by the FCC, you can hear the fraudsters deliberately trying to instill a sense of urgency in the victim by using words such as “immediately” and “set to expire today,” a common tactic used in sophisticated phishing scams.
“Hello. This is Comcast Xfinity. We're reviewing your account, and it appears your 50 percent discount on your monthly bill is set to expire today. To confirm and secure your savings, call the number displayed on your caller ID immediately. This offer cannot be extended. Thank you.”
- Vishing voicemail transcript
The FCC also noted that those who do call the number back are often encouraged to act quickly to ensure they keep the “discount.”
In many cases, the bad actors would tell the victim the only way to get the discount would be “to prepay for multiple months of service using a gift card,” the FCC said.
“Pressure to act quickly and only accepting gift cards as payment are sure signs of a scam,” the FCC tells consumers.
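The two tells the FCC calls out, urgency language and gift-card-only payment, are simple enough to check for mechanically. A toy sketch (the phrase list and scoring are my own illustration based on the advisory, not an FCC tool; real spam filters are far more sophisticated):

```python
import re

# Red-flag phrases drawn from the FCC advisory: pressure to act fast,
# and gift cards as the only accepted payment.
RED_FLAGS = [
    r"\bimmediately\b",
    r"\bexpire[sd]? today\b",
    r"\bgift cards?\b",
    r"\bprepay\b",
]

def scam_score(message: str) -> int:
    """Count how many distinct red-flag phrases appear in the message."""
    text = message.lower()
    return sum(bool(re.search(pattern, text)) for pattern in RED_FLAGS)

voicemail = ("Your 50 percent discount is set to expire today. "
             "Call the number on your caller ID immediately.")
print(scam_score(voicemail))  # 2
```

The recorded Comcast Xfinity voicemail above would already trip two flags before anyone picks up the phone.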
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
![[Image: SP1-Scripter.png]](https://www.save-point.org/images/userbars/SP1-Scripter.png)
![[Image: SP1-Writer.png]](https://www.save-point.org/images/userbars/SP1-Writer.png)
![[Image: SP1-Poet.png]](https://www.save-point.org/images/userbars/SP1-Poet.png)
![[Image: SP1-PixelArtist.png]](https://www.save-point.org/images/userbars/SP1-PixelArtist.png)
![[Image: SP1-Reporter.png]](https://i.postimg.cc/GmxWbHyL/SP1-Reporter.png)
My Original Stories (available in English and Spanish)
List of Compiled Binary Executables I have published...
HiddenChest & Roole
Give me a free copy of your completed game if you include at least 3 of my scripts!
Just some scripts I've already published on the board...
KyoGemBoost XP VX & ACE, RandomEnkounters XP, KSkillShop XP, Kolloseum States XP, KEvents XP, KScenario XP & Gosu, KyoPrizeShop XP Mangostan, Kuests XP, KyoDiscounts XP VX, ACE & MV, KChest XP VX & ACE 2016, KTelePort XP, KSkillMax XP & VX & ACE, Gem Roulette XP VX & VX Ace, KRespawnPoint XP, VX & VX Ace, GiveAway XP VX & ACE, Klearance XP VX & ACE, KUnits XP VX, ACE & Gosu 2017, KLevel XP, KRumors XP & ACE, KMonsterPals XP VX & ACE, KStatsRefill XP VX & ACE, KLotto XP VX & ACE, KItemDesc XP & VX, KPocket XP & VX, OpenChest XP VX & ACE