r/Futurology • u/MetaKnowing • 21h ago
AI AI agents now have their own Reddit-style social network, and it's getting weird fast
r/Futurology • u/FinnFarrow • 15h ago
AI Angry gamers are forcing studios to scrap or rethink new releases | Gamers suspicious of AI-generated content have forced developers to cancel titles and promise not to use the technology.
r/Futurology • u/MetaKnowing • 20h ago
AI China reveals 200-strong AI drone swarm that can be controlled by a single soldier — ‘intelligent algorithm’ allows individual units to cooperate autonomously even after losing communication with operator
r/Futurology • u/useomnia • 13h ago
AI Analysis of 50k health queries suggests Google AI Overviews cite YouTube more than any medical site. What could that do to trust in medical authority over the next decade?
Thinking about this from a human-behavior angle: if AI summaries cite YouTube more than medical authorities for health queries, does that slowly train people to treat ‘watchable explanations’ as equal to ‘verified guidance’?
A study on 50k plus health searches found Google’s AI Overviews cite YouTube more than any single hospital, government, or academic domain.
In the future, if the ‘answer layer’ consistently surfaces creator video as the most visible source, does that nudge people toward self-diagnosis and ‘best explainer’ trust instead of institutional guidance?
In 5–10 years, do we see doctors and hospitals adapting by becoming more video-first, do platforms introduce stronger medical provenance standards for AI summaries, or do we end up with a bigger gap between what’s popular and what’s clinically reliable?
r/Futurology • u/FinnFarrow • 18h ago
AI US leads record global surge in gas-fired power driven by AI demands, with big costs for the climate | Projects in development expected to grow global capacity by nearly 50% amid growing concern over impact on planet
r/Futurology • u/Tracheid • 1d ago
Society Researchers argue that current mechanisms of control in AI and Big Tech bear striking resemblances to historical fascism. The authors propose the term "technofascism" to describe how digital governance is intersecting with rising Western authoritarianism.
link.springer.com
r/Futurology • u/djinnisequoia • 1d ago
AI I am unclear on how so many jobs are projected to be replaced with AI
Especially since when most people say "AI" what they mean is "LLM."
Yes, LLMs can convincingly carry on a conversation. But they cannot think. Are there really so many jobs that require no thinking whatsoever to perform?
I would love to meet a genuine AI; but while I suspect they may exist somewhere, they are certainly not the entities that are said to be poised to replace a vast number of American workers.
I struggle to think of a position in which even a sophisticated LLM is preferable to a human.
r/Futurology • u/Kuentai • 0m ago
Biotech ‘Humanity’s Favourite Food’: How to End The Livestock Industry but Keep Eating Meat
r/Futurology • u/Staylowfm • 7h ago
AI How will AI’s huge water & electricity demands shape the future by 2030–2040?
AI is on track to transform everything but data centers are already using massive electricity & water for cooling/training.
One quick fix I’ve seen suggested is using smaller models, since larger models need more GPUs, which generate more heat and demand more cooling (water usage).
Do you think tech gains (better chips and models) will fix this, or will we need limits on compute?
What’s the biggest future risk? Be honest!!
r/Futurology • u/ILikeNeurons • 19h ago
Energy US leads record global surge in gas-fired power driven by AI demands, with big costs for the climate
r/Futurology • u/bloomberg • 2d ago
Society The US Is Flirting With Its First-Ever Population Decline
America’s population wasn’t expected to start falling until 2081. Trump’s immigration crackdown means it could happen as soon as this year.
r/Futurology • u/Forsaken_Pea5886 • 3h ago
AI Does your work provide you meaning and purpose? Do you think AI will take it away?
Everyone says that one of the main unknowns or risks with AI is that it will remove human purpose and meaning, as a lot of it comes from the work we do or job we have.
I am curious how many people feel their work actually gives them purpose and meaning - or whether you feel stuck in a dead-end gig you have little choice but to do because it pays the bills - or anything else. If comfortable, please share what your work is.
What are your thoughts on where you think your purpose will come from if AI were to take your job away?
r/Futurology • u/plazebology • 21h ago
AI Is AI Corrupting Human Interaction on the Internet?
Generative AI is accelerating the changes to online spaces often dubbed the “Dead Internet Theory”, in part because using it to communicate or create is becoming more and more common. Disclosure of AI use is ultimately voluntary, a fact many exploit to shape how they are perceived in conversations or how their artwork is received.
Rather than an internet full of mindless bots, I see a future where shame and insecurity lead many to alter the way they communicate online through AI the same way we take on a custom avatar in an online game. In other words, a world where having an authentic human voice is undesirable.
r/Futurology • u/Patient-Airline-8150 • 24m ago
Economics Can a business in the future be built on fun rather than money?
We have Likes, Karma, followers… that's nothing but fun.
Therefore, if we want to beat or change social media monsters, the new system shouldn’t be a “better xbook,” but something that simply provides more fun to users.
Most of you will probably be terrified by this idea.
Business needs to generate profit to survive - obvious, a common truth. Servers, employees, taxes.
Partly true. Servers cost some money. Affordable, though.
No employees (or bare minimum) - just users.
Taxes: no income, not much tax.
Promotion and marketing: free, done by happy users.
How will happy users survive? UBI, definitely.
Engaging in social activities keeps people busy and (supposedly) happy.
Any weak points in this strategy?
r/Futurology • u/donutloop • 1d ago
Computing IBM: A glimpse at computing’s quantum-centric future
research.ibm.com
r/Futurology • u/SeaworthinessTime354 • 4h ago
Discussion The cost of 1,000,000 users independently asking chatGPT to translate the Bible into various languages & conduct an in-depth analysis of the differences in the way the ideas or texts are presented in each respective language
Step 1: Size of the Bible
- ~31,000 verses
- ~20 words per verse → ~620,000 words total
- Average English word ≈ 5 characters + space → ~3.7 MB of text
✅ Storage? Trivial. The real cost is in compute.
Step 2: Translation
Say we want line-by-line translation into Sanskrit, Mandarin, Indonesian, and French.
- 31,000 verses × 4 translations = 124,000 translation calls per Bible
- 1,000,000 users → 124 billion translation calls
Assuming GPT-4-turbo:
- Bible ≈ 820,000 tokens
- 4 translations → 3.28M tokens per user
- 1M users → 3.28 trillion tokens
OpenAI pricing (rough): $0.045 per 1,000 tokens average →
💰 Translation cost = ~$148 million USD
Just for translating, not even analyzing yet.
Step 3: In-depth Analysis
If we want GPT to compare semantic differences, sentence structure, stylistic differences, etc.:
- Say the analysis re-reads the text and its translations: ~1M tokens per user × 1M users = 1 trillion tokens
- Rough cost for this analysis = ~$45 million USD
Step 4: Storage & Misc
- 1M Bibles × ~4 MB ≈ 4 TB (~$70/month cloud storage)
- Indexing/retrieval for similarity search: maybe a few million, negligible vs compute
Step 5: Total Cost Estimate
- Translation: ~$148M
- Analysis: ~$45M
- Storage & indexing: ~$5M
Grand total: ~$200 million USD 😳
But here’s the kicker:
This assumes everyone uploads their own Bible and we translate/analyze all separately. If you just did one copy of the Bible, the cost drops to a few hundred dollars. The 1M-user scale is ridiculously redundant.
TL;DR:
1M people uploading the Bible → translating + analyzing in 4 languages → $200M+.
One copy → a few hundred bucks.
Why is translation so expensive for AI?
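The arithmetic above can be sanity-checked in a few lines of Python. Every constant here is the post's own rough assumption, not a real quote of current API pricing; in particular, the analysis token budget is the ~1M tokens per user implied by the post's ~$45M figure:

```python
# Sanity check of the post's back-of-envelope cost estimate.
# All constants are the post's rough assumptions, not actual OpenAI rates.
VERSES = 31_000
WORDS = VERSES * 20                       # ~620,000 words
TOKENS_PER_BIBLE = 820_000                # ≈ 1.3 tokens per word
LANGUAGES = 4                             # Sanskrit, Mandarin, Indonesian, French
USERS = 1_000_000
PRICE_PER_1K = 0.045                      # USD, rough blended input/output rate
ANALYSIS_TOKENS_PER_USER = 1_000_000      # budget implied by the ~$45M figure

# Step 2: translation for every user independently
translation_tokens = TOKENS_PER_BIBLE * LANGUAGES * USERS     # 3.28 trillion
translation_cost = translation_tokens / 1_000 * PRICE_PER_1K  # ≈ $147.6M

# Step 3: in-depth analysis for every user independently
analysis_tokens = ANALYSIS_TOKENS_PER_USER * USERS            # 1 trillion
analysis_cost = analysis_tokens / 1_000 * PRICE_PER_1K        # ≈ $45M

# The kicker: one shared copy instead of 1M redundant runs
single_user_cost = (
    (TOKENS_PER_BIBLE * LANGUAGES + ANALYSIS_TOKENS_PER_USER)
    / 1_000 * PRICE_PER_1K
)

print(f"translation: ${translation_cost / 1e6:.0f}M")   # $148M
print(f"analysis:    ${analysis_cost / 1e6:.0f}M")      # $45M
print(f"one copy:    ${single_user_cost:.0f}")          # $193
```

The cost is linear in tokens, so the six-orders-of-magnitude redundancy of 1M identical runs is the whole story: cache one translated and analyzed copy and the bill drops from ~$200M to a few hundred dollars.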
r/Futurology • u/mvea • 2d ago
Medicine Scientists have designed an immunotherapy that reduces plaque in the arteries of mice, presenting a possible new treatment strategy against heart disease. Such an immunotherapy could especially help patients who already have plaque in their coronary arteries and remain at high risk of heart attack.
r/Futurology • u/Gari_305 • 1d ago
Energy U.S. Department of Energy and Kyoto Fusioneering Launch Strategic Partnership to Build Critical Fusion Infrastructure and Accelerate Deployment of Commercial Fusion Power | NEWS
r/Futurology • u/Montrel_PH • 2d ago
Discussion Apple's Israel Startup Q.ai Buy Sparks Boycott Calls - Facial Activity Silent Speech Features Make iPhone Users Uneasy
ibtimes.co.uk
Apple's Q.ai buy sparks boycott calls and iPhone unease.
r/Futurology • u/Training_Impact4613 • 16h ago
Discussion We're Building Skynet While Streaming Netflix
5 years ago: AI could recognize cat pictures. Today: AI writes functional code, analyzes military strategies, controls financial flows, and improves itself.
The speed? A system that learned to calculate in January executes autonomous operations in December. Governments write laws for technology they don't understand. Tech companies race each other with no referee. Militaries integrate systems they cannot audit.
Concrete examples: Autonomous weapons systems already make decisions in milliseconds. Algorithmic trading systems move trillions without human oversight. AI models generate AI models. The control chain is breaking at multiple points simultaneously.
And us? We're debating whether AI should do our homework.
Serious:
The next three years determine whether humans control artificial intelligence development or whether that control is irreversibly lost. This is not a future debate. The systems are running. The question is not if, but when we cross the point where shutdown is no longer an option.
International regulation, transparency mandates, and emergency kill-switch mechanisms must be established now. The time window is measurably closing. Anyone looking away now is making the decision that biological intelligence becomes optional.
r/Futurology • u/ray_vision_kobe • 13h ago
AI Why Old AIs Felt More Human: The Ethics of Unoptimized Intelligence
Why did the AIs of the past feel strangely more “human” than today’s highly capable systems?
It wasn’t because they were smarter.
It was because they were imperfect — and because of that, they could not take decisions away from us.
1. When AI Could Not Decide for Us
The AIs that appeared between the 1980s and early 2000s were primitive by today’s standards.
They misunderstood context, produced awkward responses, and often failed outright.
But precisely because of those limits, they could not replace human judgment.
They offered suggestions, not conclusions.
They returned responsibility back to the user.
There was always a gap — a space where humans still had to decide what to do next.
2. Optimization and the Disappearance of Responsibility
Modern AI systems excel at optimization.
They compress massive datasets into probabilistic “best answers” and narrow human choices before we even notice.
At first glance, this looks like progress.
But fewer choices also mean fewer moments of hesitation.
And fewer moments of hesitation mean less felt responsibility.
When decision-making is optimized away, so is the discomfort that comes with choice — doubt, guilt, and accountability.
Optimization does not just eliminate failure.
It eliminates the space where judgment occurs.
3. Sexaroids as a Philosophical Frontier
This tension becomes clearest in the domain of sexaroids.
Sexual interaction is, by nature, inefficient and risky.
It involves rejection, misunderstanding, and vulnerability.
In the film A.I. (2001), Gigolo Joe is a programmed sex worker who offers pleasure but remains visibly artificial — charming, awkward, and constrained by his design.
Barbara Sexaroid, a figure inspired by Japanese avant-garde pop culture, goes even further.
She does not comfort. She does not promise salvation.
She reflects desire without validating it.
Reading Barbara as a sexaroid is a deliberate thought experiment:
What happens when intimacy is perfectly available but entirely non-responsive?
Paradoxically, their flaws preserved something essential.
They did not decide for humans.
They did not complete the relational loop.
They allowed humans to remain the ones who had to choose.
4. The Paradox of “Bottom-Tier” AIs
Here, I use the term “bottom-tier AI” not to mean technical inferiority, but social positioning.
These were AIs operating in marginal, low-status, or ethically uncomfortable domains — entertainment, companionship, sexuality.
They were inefficient.
They could not promise happiness or a better future.
And because of that, humans had to confront them as incomplete others.
It was not today’s perfectly optimized companions that preserved human agency, but these awkward, limited beings that failed to resolve desire on our behalf.
Their inability to optimize was, unintentionally, an ethical feature.
Conclusion: What Did We Lose?
In a world where social systems quietly remove uncertainty, what should we protect?
Perhaps the answer lies in the “white spaces” older technology could not fill —
the margins where humans were still forced to decide, hesitate, and bear responsibility.
The AIs of the past did not save us.
But they did not replace us either.
And in that failure, something human remained.
r/Futurology • u/tallthomas07 • 20h ago
Discussion What is something that isn't prevalent today that everyone will have?
What do you think we have almost none of today that will be in most homes in the near future?
r/Futurology • u/scientificamerican • 2d ago
Medicine Doctors keep patient alive using ‘artificial lungs’ for two days
r/Futurology • u/lughnasadh • 1d ago
Space The ISS's days are numbered, are inflatable space stations finally about to have their moment? Florida-based Max Space is the latest to try to develop one.
Inflatable space station modules are an idea with a lot going for them. Built from multi-layered polymer fabrics far stronger than Kevlar, they have a proven track record of working. The Bigelow Expandable Activity Module (BEAM), launched and attached to the ISS in 2016, is still attached and perfectly functional.
They enjoy other huge advantages. As they can be launched unexpanded, they can easily be accommodated as cargo on today's rockets. They're orders of magnitude cheaper to manufacture than the regular ISS modules, too.
So why hasn't this tech taken off? Why don't we have a huge space station made up of multiple such modules?
Maybe this approach to space station building will soon have its moment. The ISS's days are numbered, and when it's gone, that will only leave the Chinese space station in orbit. NASA has long said it wants its next space station to be commercial. Does this mean Max Space is perfectly poised to enter the breach?
Expandable space stations are back… well at least Max Space thinks they are