The Oversight Board is urging Meta to update its manipulated media policy, calling the current rules “incoherent.” The admonishment comes in a closely watched decision about a misleadingly edited video of President Joe Biden.
The board ultimately sided with Meta's decision not to remove the clip at the center of the case. The video featured footage from October 2022, when the president accompanied his granddaughter, who was voting in person for the first time. News footage shows that after voting, he placed an "I voted" sticker on her shirt. A Facebook user later shared an edited version that looped the moment so it appeared as if he repeatedly touched her chest. The caption accompanying the clip called him a "sick pedophile," and said those who voted for him were "mentally unwell."
In its decision, the Oversight Board said that the video did not violate Meta's narrowly written manipulated media policy because it was not edited with AI tools, and because the edits were "obvious and therefore unlikely to mislead" most users. "Nevertheless, the Board is concerned about the Manipulated media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created rather than on which specific harms it aims to prevent (for example, to electoral processes)," the board wrote. It urged Meta to "reconsider this policy quickly, given the number of elections in 2024."
The company's current rules only apply to videos that are edited with AI, and don't cover other types of editing that could be misleading. In its policy recommendations, the Oversight Board says Meta should write new rules that cover both audio and video content. The policy should apply not just to misleading speech but also to "content showing people doing things they did not do," and the board says these rules should apply "regardless of the method of creation." Furthermore, the board recommends that Meta no longer remove posts with manipulated media if the content itself isn't breaking any other rules. Instead, the board suggests Meta "apply a label indicating the content is significantly altered and may mislead."
The recommendations underscore mounting concern among researchers and civil society groups about how the surge in AI tools could enable a new wave of viral election misinformation. In a statement, a Meta spokesperson said the company is “reviewing the Oversight Board’s guidance and will respond publicly” within the next 60 days. While that response would come well before the 2024 presidential election, it’s unclear when, or if, any policy changes may come. The Oversight Board writes in its decision that Meta representatives indicated the company “plans to update the Manipulated Media policy to respond to the evolution of new and increasingly realistic AI.”
This article originally appeared on Engadget at https://www.engadget.com/maliciously-edited-joe-biden-video-can-stay-on-facebook-metas-oversight-board-says-110042024.html?src=rss
Bad actors keep using deepfakes for everything from impersonating celebrities to scamming people out of money. The latest instance is out of Hong Kong, where a finance worker for an undisclosed multinational company was tricked into remitting HK$200 million (about $25.6 million).
According to Hong Kong police, scammers contacted the employee posing as the company’s United Kingdom-based chief financial officer. He was initially suspicious, as the email called for secret transactions, but that’s where the deepfakes came in. The worker attended a video call with the “CFO” and other recognizable members of the company. In reality, each “person” he interacted with was a deepfake — likely created using public video clips of the actual individuals.
The deepfakes asked the employee to introduce himself and then quickly instructed him to make 15 transfers, totaling $25.6 million, to five local bank accounts. They created a sense of urgency for the task, and then the call abruptly ended. A week later, the employee followed up on the request within the company and discovered the truth.
Hong Kong police have arrested six people so far in connection with the scam. The individuals involved had stolen eight identification cards and filed 54 bank account registrations and 90 loan applications in 2023. They had also used deepfakes to trick facial recognition software in at least 20 cases.
The widespread use of deepfakes is one of the growing concerns of evolving AI technology. In January, Taylor Swift and President Joe Biden were among those whose identities were forged with deepfakes. In Swift’s case, it was nonconsensual pornographic images of her and a financial scam targeting potential Le Creuset shoppers. President Biden’s voice could be heard in some robocalls to New Hampshire constituents, imploring them not to vote in their state’s primary.
This article originally appeared on Engadget at https://www.engadget.com/scammers-use-deepfakes-to-steal-256-million-from-a-multinational-firm-034033977.html?src=rss
One of Vision Pro's most intriguing features is undoubtedly the EyeSight display, which projects a visual feed of your own eyes to better connect with people in the real world — because eye contact matters, be it real or virtual. As iFixit discovered in its teardown, Apple leveraged a stereoscopic 3D effect to make your virtual eyes look more lifelike, as opposed to a conventional "flat" output on the curved OLED panel. This is achieved by stacking a widening optical layer and a lenticular lens layer over the OLED screen, which is why exposing the bare panel shows "some very oddly pinched eyes." The optical nature of the added layers also explains the EyeSight display's dim output. Feel free to check out the scientific details in iFixit's article.
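For readers curious how a lenticular lens layer can make a single flat panel show different imagery from different angles, the sketch below illustrates the general principle of lenticular interlacing: several pre-rendered views are interleaved column by column, and the lens sheet steers each column of pixels toward a different viewing direction. This is a minimal, hypothetical illustration of the generic technique (in Python with NumPy), not Apple's actual rendering pipeline; the function name, view count, and image sizes are assumptions made purely for the example.

```python
import numpy as np

def interlace_views(views):
    """Interleave N pre-rendered views column by column.

    A lenticular lens sheet laid over the panel directs each column's
    pixels toward a different viewing angle, which is the basic idea
    behind glasses-free view-dependent displays. (Illustrative only,
    not Apple's implementation.)

    views: list of H x W x 3 arrays, one per viewing angle.
    Returns an H x W x 3 interlaced frame.
    """
    n = len(views)
    h, w, c = views[0].shape
    out = np.empty((h, w, c), dtype=views[0].dtype)
    for col in range(w):
        # Column `col` is taken from view (col mod n), so adjacent
        # columns belong to different views.
        out[:, col, :] = views[col % n][:, col, :]
    return out

# Two toy "views": viewed flat, the interlaced frame looks scrambled
# (compare iFixit's photos of the bare, "pinched" panel), but under a
# lens layer each eye/angle would see only its own view's columns.
left = np.zeros((8, 8, 3), dtype=np.uint8)
right = np.full((8, 8, 3), 255, dtype=np.uint8)
frame = interlace_views([left, right])
print(frame[0, :, 0])  # alternating columns: 0, 255, 0, 255, ...
```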
iFixit still has more analysis to do before it can give the Vision Pro a repairability score, but we already know that the front glass panel "took a lot of heat and time" to detach from the main body. That said, the overall modular design — especially the speakers and the external battery — should win some points. As always, head over to iFixit for some lovely close-up shots of the teardown process.
This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-teardown-deconstructs-the-weird-looking-eyesight-display-083426548.html?src=rss