Major Bitcoin mining firm Mara Holdings (formerly Marathon Digital) has reportedly bought a major wind farm in Texas to help charge its mining ambitions.
Streamr and JDI have entered a strategic partnership to facilitate home mining with the launch of Terminal Multi-Miner. JDI is a leader and venture capital firm in decentralized physical infrastructure networks (DePIN), and Streamr is a decentralized real-time data network. Terminal Multi-Miner offers multi-token mining capabilities and decentralized protocol participation, allowing users to engage with DePIN […]
Pepeto, a memecoin project designed to integrate cross-chain utility with community-driven development, continues to make waves as it prepares for its grand entry into the crypto market. While Pepeto has drawn notable interest from the crypto community with its unique narrative and innovative tools, the project revealed it has raised over […]
Oxya Origin is revolutionizing the blockchain gaming landscape with its player-centric universe of interconnected games. The ecosystem emphasizes community involvement and seamless interaction, delivering an engaging experience for gamers. This December heralds two significant events: the Alpha launch of Road to Genesis and the Token Generation Event (T.G.E.) for the […]
What happens when AI generated media becomes ubiquitous in our lives? How does this relate to what we’ve experienced before, and how does it change us?
This is the first part of a two-part series analyzing how people and communities are affected by the expansion of AI-generated content. I’ve already talked at some length about the environmental, economic, and labor issues involved, as well as discrimination and social bias. But this time I want to dig in and focus on some of the psychological and social impacts of the AI-generated media and content we consume, specifically on our relationship to critical thinking, learning, and conceptualizing knowledge.
History
Hoaxes have been perpetrated using photography essentially since its invention. The moment we had a form of media believed to show us the true, unmediated reality of phenomena and events was the moment people started coming up with ways to manipulate it, to great artistic and philosophical effect (as well as humorous or simply fraudulent effect). Despite this, we retain a degree of unwarranted trust in photographs, and we have developed a relationship with the form that balances trust and skepticism.
When I was a child, the internet was not yet broadly available to the general public, and certainly very few homes had access to it, but by the time I was a teenager that had completely changed, and everyone I knew spent time on AOL instant messenger. Around the time I left graduate school, the iPhone was launched and the smartphone era started. I retell all this to make the point that cultural creation and consumption changed startlingly quickly and beyond recognition in just a couple of decades.
I think the current moment represents a whole new era specifically in the media and cultural content we consume and create, because of the launch of generative AI. It’s a little like when Photoshop became broadly available: we started to realize that photos were sometimes retouched, and we began to question whether we could trust what images showed. (Readers may find the ongoing conversation around “what is a photograph” an interesting extension of this issue.) But even then, Photoshop was expensive and required real skill to use effectively, so most photos we encountered were relatively true to life, and I think people generally expected that images in advertising and film were not going to be “real”. Our expectations and intuitions had to adjust to the changes in technology, and we more or less did.
Current Day
Today, AI content generators have democratized the ability to artificially produce or alter any kind of content, including images. Unfortunately, it’s extremely difficult to estimate how much of the content online may be AI-generated — if you google this question, you’ll get references to an article citing Europol and claiming the figure will be 90% by 2026 — but read the underlying research paper and you’ll see it says nothing of the sort. You might also find a paper by some AWS researchers being cited for a figure of 57% — but that’s also a mistaken reading (they’re talking about text content being machine translated, not text generated from whole cloth, to say nothing of images or video). As far as I can tell, there’s no reliable, scientifically grounded work indicating how much of the content we consume is actually AI-generated — and even if there were, it would be outdated the moment it was published.
But if you think about it, this is perfectly sensible. A huge part of the reason AI-generated content keeps coming is that it’s harder than ever before in human history to tell whether a human being actually created what you are looking at, and whether that representation reflects reality. How do you count something, or even estimate a count, when it’s explicitly unclear how to identify it in the first place?
I think we all have the lived experience of spotting content with questionable provenance. We see images that seem to sit in the uncanny valley, or strongly suspect that a product review on a retail site sounds unnaturally positive and generic, and think: that must have been created using generative AI and a bot. Ladies, have you tried to find inspiration pictures for a haircut online recently? In my experience, 50% or more of the pictures on Pinterest and similar sites are clearly AI-generated, with tell-tale signs: textureless skin, rubbery features, straps and necklaces disappearing into nowhere, images that pointedly exclude hands, never showing both ears straight on, and so on. Those are easy to dismiss, but a large swath makes you question whether you’re seeing heavily filtered real images or wholly AI-generated content. I make it my business to understand these things, and I’m often not sure myself. I hear tell that single men on dating apps are so swamped with scam bots built on generative AI that there’s a name for the way to check — the “Potato Test”. If you ask the bot to say “potato”, it will ignore you, but a real person will likely play along. The small, everyday areas of our lives are being infiltrated by AI content without anything like our consent or approval.
Why?
What’s the point of dumping AI slop in all these online spaces? In the best-case scenario, the goal may be to get folks to click through to sites where advertising lives, offering nonsense text and images just convincing enough to earn those precious ad impressions and a few cents from the advertiser. Artificial reviews and images for online products are generated by the truckload, so that drop-shippers and vendors of cheap junk can fool customers into buying something that’s just a little cheaper than all the competition, letting them hope they’re getting a legitimate item. Perhaps the item can be so incredibly cheap that the disappointed buyer will just accept the loss and not go to the trouble of getting their money back.
Worse, bots using LLMs to generate text and images can be used to lure people into scams, and because the only real resource necessary is compute, scaling such scams costs pennies — well worth the expense if you can steal even one person’s money every so often. AI-generated content is used for criminal abuse, including pig butchering scams, AI-generated CSAM, and non-consensual intimate images, which can turn into blackmail schemes as well.
There are also political motivations for AI-generated images, video, and text. In this US election year, entities all across the world with different angles and objectives produced AI-generated images and videos to support their viewpoints, and spewed propagandistic messages via generative AI bots onto social media, especially on the former Twitter, where content moderation to prevent abuse, harassment, and bigotry has largely ceased. Those disseminating this material expect uninformed internet users to absorb the message through continual, repetitive exposure, and for every item a user recognizes as artificial, an unknown number will be accepted as legitimate. This material also creates an information ecosystem in which truth is impossible to define or prove, neutralizing good actors and their attempts to cut through the noise.
A small minority of the AI-generated content online will be genuine attempts to create appealing images just for enjoyment, or relatively harmless boilerplate text generated to fill out corporate websites. But as we are all well aware, the internet is rife with scams and get-rich-quick schemers, and the advances of generative AI have ushered in a whole new era for these sectors. (And these applications have massive negative implications for real creators, energy and the environment, and other issues.)
Where We Are
I’m painting a pretty grim picture of our online ecosystems, I realize. Unfortunately, I think it’s accurate and only getting worse. I’m not arguing that there’s no good use of generative AI, but I’m becoming more and more convinced that the downsides for our society are going to have a larger, more direct, and more harmful impact than the positives.
I think about it this way: We’ve reached a point where it is unclear if we can trust what we see or read, and we routinely can’t know if entities we encounter online are human or AI. What does this do to our reactions to what we encounter? It would be silly to expect our ways of thinking to not change as a result of these experiences, and I worry very much that the change we’re undergoing is not for the better.
The ambiguity is a big part of the challenge, however. It’s not that we know we’re consuming untrustworthy information; it’s that it’s essentially unknowable. We can never be sure. Critical thinking and critical media consumption habits help, but the expansion of AI-generated content may be outstripping our critical capabilities, at least in some cases. This seems to me to have real implications for our concepts of trust and confidence in information.
In my next article, I’ll discuss in detail what kind of effects this may have on our thoughts and ideas about the world around us, and consider what, if anything, our communities might do about it.
Also, regular readers will know I publish on a two-week schedule, but I am moving to a monthly publishing cadence going forward. Thank you for reading, and I look forward to continuing to share my ideas!
Apple Watch collects much more health data than users realize. Let’s walk through some of the lesser-known metrics so you can leverage them to improve your health.
Reviewing our sleep data on Apple Watch
For years now, Apple has been boasting about the health impacts of Apple Watch, featuring touching stories in many of its event hype videos. It’s easy to see what a difference the wearable has made.
Everyone knows about a lot of the major features, such as the ability to take an ECG, call emergency services after a car crash, or track your workouts throughout the day. That, though, only scratches the surface.
Apple has distributed its fourth developer beta build of visionOS 2.2, giving the Apple Vision Pro’s operating system more testing time ahead of a public release.
Following the Thanksgiving weekend, Apple had a more relaxed beta release schedule than usual on Tuesday, with the first build of the round being the fourth developer beta of visionOS 2.2.
The third developer beta of this version of visionOS was handed out to testers on November 18.
An exclusive blowout discount is in effect on Apple’s 14-inch MacBook Pro M3 with a bump up to 16GB of RAM, now priced at $1,349.
Apple’s M3 MacBook Pro is on sale for $1,349 this week only.
With other retailers selling the closeout 8GB model for $1,299, it’s worth spending $50 more to get the upgrade to 16GB of RAM and help future-proof your purchase.
Pick up the M3 14-inch model with 16GB unified memory and 512GB of storage for just $1,349* at Apple Authorized Reseller B&H Photo when you shop through the pricing links in this post from a laptop or desktop (mobile apps are not supported at this time).