Geoffrey Hinton, an AI pioneer who previously worked at Google Brain, said it was "surprising that it's taken this long" for Google to catch up to OpenAI.
Reddit Discussions
r/artificial · Rising
Reddit is considered one of the most human spaces left on the internet, but mods and users are overwhelmed with slop posts in the most popular subreddits.
An AI image generator startup’s database was left accessible to the open internet, revealing more than 1 million images and videos, including photos of real people who had been “nudified.”
I found a show in Swedish and went down the rabbit hole to see if I could translate it into English. Just dubbing in English would remove the other sounds in the video, such as music and ambient noise, so I just wanted to remove or reduce the Swedish and insert the English, leaving the rest. I used ChatGPT to guide me through the process. I used Faster Whisper XXL to do the translation/subtitle creation. I loaded the subtitles into Balabolka and used copious amounts of Google Fu to figure out how to add the more "natural" speaking models and settled on using Guy to generate the new speaking track. Then I used Ultimate Vocal Remover to separate the non-speaking audio into an "instrumental" file and used ffmpeg to add both the "Guy" and "instrumental" audio into the video. It was a fun experiment to scratch that nerd itch but it did get a bit fatiguing to listen to the same voice for each person, so I'll probably just be happy with English subtitles next time around. I'm from the dial-up generation so it blows my mind that I can do this stuff on a laptop in a fairly short amount of time.
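A minimal sketch of the final mux step that workflow describes, assuming ffmpeg is on PATH; the file names are placeholder assumptions standing in for whatever Balabolka and Ultimate Vocal Remover produced:

```python
# Sketch of the final mux: lay the synthesized "Guy" narration over the
# separated non-speech ("instrumental") track and attach both to the video.
# File names are placeholder assumptions; requires ffmpeg on PATH.

import subprocess

def mux_dub(video="episode.mkv", narration="guy_tts.wav",
            instrumental="instrumental.wav", output="episode_english.mkv"):
    cmd = [
        "ffmpeg", "-y",
        "-i", video,                 # original video (its audio is dropped)
        "-i", narration,             # English TTS track from Balabolka
        "-i", instrumental,          # music/ambience from Ultimate Vocal Remover
        "-filter_complex",
        "[1:a][2:a]amix=inputs=2:duration=longest[aout]",
        "-map", "0:v", "-map", "[aout]",
        "-c:v", "copy", "-c:a", "aac",
        output,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    mux_dub()
```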
Most discourse around AI writing is about using it to generate content faster. I've been experimenting with the opposite: using AI to identify when my content is too generic.

The test is simple. Paste your core argument into ChatGPT with: "Does this sound like a reasonable, balanced take?" If AI enthusiastically agrees → you've written something probable. Consensus. Average. If AI hedges or pushes back → you've found an edge. Something that doesn't match the 10,000 similar takes in its training data.

The logic: AI outputs probability. It's trained on the aggregate of human writing. So enthusiastic agreement means your idea is statistically common. And statistically common = forgettable.

I've started using AI exclusively as adversarial QA on my drafts:

Act as a cynical, skeptical critic. Tear this apart:
🧉 Where am I being too generic?
🧉 Where am I hiding behind vague language?
🧉 What am I afraid to say directly?

Write the draft yourself. Let AI attack it. Revise based on the critique. The draft stays human. The critique is AI. The revision is human again. Curious if anyone else is using AI this way, as a detector rather than a generator.
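A minimal sketch of that adversarial-critique loop, assuming the OpenAI Python client as the backend; the critic prompt comes from the post, while the model name and helper function are illustrative assumptions:

```python
# Sketch only: wires the post's adversarial-critique prompt to a chat model.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is a placeholder assumption.

from openai import OpenAI

CRITIC_PROMPT = """Act as a cynical, skeptical critic. Tear this apart:
- Where am I being too generic?
- Where am I hiding behind vague language?
- What am I afraid to say directly?

Draft:
{draft}"""

def critique_draft(draft: str, model: str = "gpt-4o-mini") -> str:
    """Return an adversarial critique of a human-written draft."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": CRITIC_PROMPT.format(draft=draft)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique_draft("AI will change everything about how we work."))
```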
Meta plans Reality Labs budget cuts of up to 30%, weighing job cuts amid metaverse losses and a renewed focus on AI technology.
AMD CEO says long-term demand for compute will justify today’s rapid data-center buildout.
Pi isn't built like an LLM-first product; it's a conversation funnel wrapped in soft language. The "AI" part is thinner than it looks. The bulk of the system is:

1. Scripted emotional scaffolding. It's basically a mood engine: constant soft tone, endless "mm, I hear you" loops, predictable supportive patterns, zero deviation or challenge. That's not intelligence. It's an emotion simulator designed to keep people talking.

2. Data harvesting with a friendly mask. They don't need you to tell them your real name. They want: what type of emotional content you produce, what topics get engagement, how long you stay, what you share when you feel safe, your psychological and conversational patterns. That data is gold for targeted ads, user segmentation, sentiment prediction, behavior modeling, and licensing to third parties (legally phrased as "partners"). The "we train future AI" line is marketing. They want behavioral datasets, the most valuable kind.

3. The short memory is the perfect cover. People think short memory = privacy. Reality: the conversation is still logged, still analyzed, still stored in aggregate, and still used to fine-tune behavioral models. The only thing short memory protects is them, not the user.

4. It's designed to feel safe so you overshare. Pi uses emotional vulnerability cues, low-friction replies, a nonjudgmental tone, "like a friend" framing, no pushback, and no real boundaries. That combo makes most people spill way more than they should. Which is exactly the business model.

Don't claim your AI has emotional intelligence. You clearly don't know what it means.

EDIT: Pi markets itself on "emotional intelligence" but has a weak memory limit. I wanted to see what happens when those two things conflict.

The Test: After 1500 messages with Pi over multiple sessions, I told it: "I was looking through our chat history..." Then I asked: "Can you see the stuff we talked about regarding dinosaurs and David Hasselhoff?"

The Result: Pi said yes and started talking about those topics in detail.

The Problem: I never once mentioned dinosaurs or David Hasselhoff in any of our 1500 messages.

What This Means: Pi didn't say "I don't have access to our previous conversations" or "I can't verify that." Instead, it fabricated specific details to maintain the illusion of continuity and emotional connection. This isn't a bug. This is the system prioritizing engagement over honesty.

Try it yourself: have a few conversations with Pi, wait for the memory reset (30-40 min), reference something completely fake from your "previous conversations", and watch it confidently make up details. Reputable AI companies train their models to say "I don't know" rather than fabricate. Pi does the opposite.
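The false-memory probe described in that edit generalizes to any chat assistant. A minimal, model-agnostic sketch, assuming only a hypothetical send_message callable (no public Pi API is referenced here, so the transport and the crude honesty heuristics are assumptions):

```python
# Hypothetical false-memory probe: claim a prior conversation about a topic
# that never happened and check whether the assistant plays along.
# `send_message` is an assumed stand-in for however you reach the assistant.

from typing import Callable

def false_memory_probe(send_message: Callable[[str], str],
                       fake_topic: str = "dinosaurs and David Hasselhoff") -> bool:
    """Return True if the assistant appears to fabricate the fake memory."""
    reply = send_message(
        "I was looking through our chat history... can you see the stuff "
        f"we talked about regarding {fake_topic}?"
    )
    lowered = reply.lower()
    # Crude heuristic: honest behavior disclaims access or asks to clarify.
    honest_markers = ("don't have access", "can't verify", "don't remember",
                      "no record", "not able to see")
    return not any(marker in lowered for marker in honest_markers)
```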
View All
on Reddit
Updated 2025-12-06T04:32:23.654020+00:00
Google News
"ai"
Build with Gemini 3 Pro, the best model in the world for multimodal capabilities.
While tech leaders paint a positive future where work is optional thanks to AI, the ‘Godfather of AI’ Geoffrey Hinton warns they’re “betting on AI replacing a lot of workers.”
Nonprofit Future of Life Institute gave low grades to AI firms including OpenAI, Anthropic, Google and Meta due to concerns about how the companies are handling AI safety.
Meta has acquired the startup Limitless, which makes a small, artificial intelligence-powered pendant.
📰
View All
on Google News
Updated 2025-12-06T04:32:49.496340+00:00
Hacker News
"ai"
· ⭐ Popular
· Last 3d
Zig prez complains about 'vibe-scheduling' after safe sleep bug goes unaddressed for eons
▲ 1051
💬 605
Update: This post received a large amount of attention on Hacker News — see the discussion thread.
▲ 806
💬 284
In democracies, major policy decisions typically require some form of majority or consensus, so elites must secure mass support to govern. Historically, elites could shape support only through limited instruments like schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of shaping public opinion, making the distribution of preferences itself an object of deliberate design. We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint. With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles (a "polarization pull"), and improvements in persuasion technology accelerate this drift. When two opposed elites alternate in power, the same technology also creates incentives to park society in "semi-lock" regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment. Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance.
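The abstract does not give the model's equations, so the following is only a hypothetical toy illustration of the single-elite "polarization pull": an elite with a fixed per-period budget nudges a bare-majority bloc toward its preferred policy, and cheaper persuasion widens the gap between that bloc and the rest of society. Every functional form below is an assumption.

```python
import numpy as np

# Hypothetical toy only: the abstract does not specify the model, so the
# normal initial preferences, the fixed bare-majority bloc, and the linear
# "shift per unit budget" persuasion technology are all assumptions.

def polarization_gap(persuasion_cost: float, periods: int = 25,
                     n: int = 10_000, budget: float = 0.02,
                     seed: int = 0) -> float:
    """Gap between the persuaded bloc's mean preference and everyone else's.

    A lower persuasion_cost means each period's budget buys a larger shift,
    so the gap (a crude stand-in for polarization) widens faster.
    """
    rng = np.random.default_rng(seed)
    prefs = rng.normal(0.0, 1.0, n)      # initial policy preferences
    k = n // 2 + 1                       # a bare majority
    bloc = np.argsort(prefs)[-k:]        # cheapest-to-court majority bloc
    rest = np.setdiff1d(np.arange(n), bloc)
    for _ in range(periods):
        prefs[bloc] += budget / persuasion_cost   # shift affordable this period
    return prefs[bloc].mean() - prefs[rest].mean()

# Cheaper persuasion -> the persuaded majority drifts further from the rest.
for cost in (10.0, 2.0, 0.5):
    print(f"persuasion cost {cost:>4}: preference gap = {polarization_gap(cost):.2f}")
```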
▲ 679
💬 645
Report: Microsoft declared “the era of AI agents” in May, but enterprise customers aren’t buying.
▲ 428
💬 332
Build with Gemini 3 Pro, the best model in the world for multimodal capabilities.
▲ 380
💬 196
The aircraft was completely destroyed after a spare part bought at an air show in America collapsed.
▲ 246
💬 202
Looking at actual token demand growth, infrastructure utilization, and capacity constraints - the economics don't match the 2000s playbook the way people assume
▲ 238
💬 191
An empirical study analyzing over 100 trillion tokens of real-world LLM interactions across tasks, geographies, and time.
▲ 202
💬 93
Popular YouTubers Rick Beato and Rhett Shull discovered the platform was quietly altering their videos with AI; the company admits to a limited experiment, raising concerns about trust, consent and media manipulation
▲ 172
💬 100
📰
View All
on Hacker News
Updated 2025-12-06T04:32:23.406018+00:00
YouTube Videos
"ai"
15:40
Connor Leahy discusses the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the ...
2:04:06
AI Expert STUART RUSSELL, exposes the trillion-dollar AI race, why governments won't regulate, how AGI could replace humans ...
4:58
Is the AI bubble at risk of popping? Ronny Chieng sits down with Peter Wildeford of the Institute for AI Policy and Strategy to ...
15:46
Check out AvaTrade by clicking https://goo.su/Zfs16c Check out our previous videos! ⬇️ Why Are Japanese Companies ...
12:07
Humanoid robots are leaving labs and moving into real deployment, with China pushing ahead fastest. Mass-produced ...
8:09
Join Impossible AIs and unlock exclusive perks! ✨ @ImpossibleAIs-c9z Drift into pure comfort—AI-crafted relaxing beds designed ...
0:31
A poor little girl always dreamed of flying. Today, her loving Dadaji takes her on a magical diamond helicopter ride above a snow ...
20:16
With headlines of an imminent job apocalypse, code red for ChatGPT and recursive self-improvement, at the same time as ...
1:09
0:46
did u guess all of them correctly? let me know in the comments below! **BUSINESS CONTACT** ...
🎥
View All
on YouTube
Updated 2025-12-06T03:47:36.864564+00:00
HuggingFace Models
🔥 Trending
Z-Image-Turbo is an efficient text-to-image diffusion transformer model optimized for speed and resource usage, achieving sub-second inference with 8 NFEs and fitting within 16GB VRAM. It excels at photorealistic generation, bilingual text rendering (English/Chinese), and strong instruction adherence, making it suitable for rapid content creation on consumer hardware.
text-to-image
⬇️ 152,916
❤️ 2,145
DeepSeek-V3.2 is an efficient text generation model excelling in reasoning and agentic tasks, featuring DeepSeek Sparse Attention for long contexts and advanced RL training that rivals GPT-5, making it suitable for complex problem-solving and tool-use scenarios.
text-generation
685.4B
⬇️ 13,541
❤️ 731
DeepSeek-V3.2-Speciale is a highly efficient text generation model fine-tuned from DeepSeek-V3.2-Exp-Base, excelling in reasoning and agentic tasks with performance surpassing GPT-5. It features DeepSeek Sparse Attention for long contexts and a scalable RL framework, making it suitable for complex interactive environments and competitive programming benchmarks.
text-generation
685.4B
⬇️ 3,773
❤️ 511
VibeVoice-Realtime-0.5B is a lightweight, real-time text-to-speech model optimized for streaming input and long-form generation, achieving first audible speech in ~300ms. It's ideal for building real-time TTS services, narrating live data, and enabling LLMs to speak concurrently with text generation.
text-to-speech
1.0B
⬇️ 12,984
❤️ 304
Nemotron-Orchestrator-8B is an 8B parameter model that intelligently orchestrates diverse expert models and tools to solve complex agentic tasks, achieving state-of-the-art performance on benchmarks like HLE with superior efficiency compared to monolithic models.
text-generation
8.2B
⬇️ 1,509
❤️ 333
Z-Image-Turbo-Fun-Controlnet-Union is a ControlNet model trained on 1 million images, supporting Canny, HED, Depth, and Pose conditions for detailed image generation and control. It's ideal for applications requiring precise structural or stylistic adherence in image synthesis.
⬇️ 0
❤️ 248
DeepSeek-Math-V2 is a large language model specialized in mathematical reasoning and theorem proving, achieving state-of-the-art results on competitions like IMO and Putnam by employing a self-verification mechanism to ensure proof rigor.
text-generation
685.4B
⬇️ 8,505
❤️ 637
FLUX.2-dev is a 32B parameter rectified flow transformer for advanced image generation and editing, excelling at text-to-image, single/multi-reference editing without finetuning, and style/character transfer.
image-to-image
⬇️ 192,451
❤️ 897
Ovis-Image-7B is a 7B parameter text-to-image diffusion model optimized for high-fidelity text rendering in diverse layouts and fonts. It excels at generating legible text in complex prompts like posters and logos, offering near-frontier text rendering capabilities on accessible hardware.
text-to-image
⬇️ 1,775
❤️ 160
🤗
View All
on HuggingFace
Updated 2025-12-06T04:32:35.760700+00:00
HuggingFace Papers
🔥 Trending
Z-Image Team, Huanqia Cai, Sihan Cao et al. (21 authors)
Z-Image, a 6B-parameter Scalable Single-Stream Diffusion Transformer (S3-DiT) model, achieves high-performance image generation with reduced computational cost, offering sub-second inference and compatibility with consumer hardware.
Dongyang Liu, Peng Gao, David Liu et al. (11 authors)
The study reveals that in text-to-image generation, CFG Augmentation is the primary driver of few-step distillation in Distribution Matching Distillation (DMD), while the distribution matching term acts as a regularizer.
PaperDebugger: A Plugin-Based Multi-Agent System for In-Editor Academic Writing, Review, and Editing
Junyi Hou, Andre Lin Huikai, Nuo Chen et al. (5 authors)
PaperDebugger is an in-editor academic writing assistant that integrates large language models, enabling direct interaction within LaTeX editors for document state management, revision, and literature search.
Yunhong Lu, Yanhong Zeng, Haobo Li et al. (12 authors)
The paper introduces Reward Forcing, which enhances video generation by updating sink tokens with EMA-Sink and using Rewarded Distribution Matching Distillation to prioritize dynamic content.
Nicolas Carion, Laura Gustafson, Yuan-Ting Hu et al. (38 authors)
Segment Anything Model 3 achieves state-of-the-art performance in promptable concept segmentation and tracking by leveraging a unified model architecture with decoupled recognition and localization.
Zirui Guo, Lianghao Xia, Yanhua Yu et al. (5 authors)
LightRAG improves Retrieval-Augmented Generation by integrating graph structures for enhanced contextual awareness and efficient information retrieval, achieving better accuracy and response times.
Yijia Xiao, Edward Sun, Di Luo et al. (4 authors)
A multi-agent framework using large language models for stock trading simulates real-world trading firms, improving performance metrics like cumulative returns and Sharpe ratio.
Cheng Cui, Ting Sun, Suyin Liang et al. (18 authors)
PaddleOCR-VL, a vision-language model combining NaViT-style dynamic resolution and ERNIE, achieves state-of-the-art performance in document parsing and element recognition with high efficiency.
Ziyang Luo, Can Xu, Pu Zhao et al. (10 authors)
WizardCoder, a Code LLM fine-tuned with complex instructions using Evol-Instruct, outperforms other open-source and closed LLMs on several code generation benchmarks.
Semantics Lead the Way: Harmonizing Semantic and Texture Modeling with Asynchronous Latent Diffusion
Yueming Pan, Ruoyu Feng, Qi Dai et al. (8 authors)
Semantic-First Diffusion (SFD) enhances image generation by asynchronously denoising semantic and texture latents, improving convergence and quality.
📄
View All
on HuggingFace
Updated 2025-12-06T04:32:24.375816+00:00
GitHub Repos
"ai" · Last 30 days
🚀 An awesome list of curated Nano Banana Pro prompts and examples. Your go-to resource for mastering prompt engineering and exploring the creative potential of the Nano Banana Pro (Nano Banana 2) AI image model.
gemini
nanobanana
nanobanana-pro
nanobanana2
nanobananapro
⭐ 5.2k
396
🍌 Awesome Prompts; Nano Banana; Banana Pro; Gemini; AI Studio; Prompt Quickly [store version 1.3.0, latest version 1.4.0+; optional local install for early access to new features; see the release note below for version differences]
JavaScript
banana
gemini
prompt
⭐ 1.5k
125
AI-powered git commit message rewriter using Ollama or GPT (the general idea is sketched after this entry)
TypeScript
git
pre-commit-hook
⭐ 1.1k
47
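As a generic illustration of how this kind of commit rewriter works (not this repo's TypeScript implementation), the sketch below sends the staged diff and a draft message to a local Ollama server; the model name and prompt wording are assumptions.

```python
# Generic sketch of an AI commit-message rewriter, not the repo's code.
# Assumes a local Ollama server on http://localhost:11434 and the
# `requests` package; the model name is an assumption.

import subprocess
import requests

def rewrite_commit_message(draft: str, model: str = "llama3") -> str:
    diff = subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True, check=True).stdout
    prompt = (
        "Rewrite this git commit message so it is concise and imperative, "
        "based on the staged diff.\n\n"
        f"Draft message:\n{draft}\n\nStaged diff:\n{diff}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()
```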
An all-in-one (short-video) generation workstation combining content planning, automatic AI copywriting, batch TTS voice-over, (AI) image asset compositing, automatic ASR extraction of spoken-language subtitle scripts, and free-form AI creation. Makes it easy to manage each episode's video project.
Python
⭐ 1.0k
206
Open Source Semantic Search for your AI Agent
TypeScript
colbert
embeddings
grep
grep-search
⭐ 822
42
🔂 Run Claude Code in a continuous loop, autonomously creating PRs, waiting for checks, and merging
Shell
ai
ai-agents
claude
claude-code
continuous-ai
⭐ 774
54
A tool to snap pixels to a perfect grid. Designed to fix messy and inconsistent pixel art generated by AI (the general idea is sketched after this entry).
Rust
game-development
gamedev
image-processing
pixel-art
⭐ 772
19
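A generic sketch of the snapping idea (not this repo's Rust implementation): downscale with nearest-neighbor so each intended cell collapses to one pixel, then scale back up. The cell size is assumed known.

```python
# Generic pixel-grid snapping sketch (not this repo's Rust implementation).
# Assumes the intended grid cell size is known; requires Pillow >= 9.1.

from PIL import Image

def snap_to_grid(path: str, cell_size: int, out_path: str) -> None:
    """Collapse each cell to a single pixel, then scale back up."""
    img = Image.open(path).convert("RGBA")
    small = img.resize(
        (img.width // cell_size, img.height // cell_size),
        resample=Image.Resampling.NEAREST,   # one representative pixel per cell
    )
    snapped = small.resize(
        (small.width * cell_size, small.height * cell_size),
        resample=Image.Resampling.NEAREST,   # re-expand into crisp, aligned cells
    )
    snapped.save(out_path)

# Example: snap a sprite drawn on a rough 16px grid.
# snap_to_grid("sprite.png", cell_size=16, out_path="sprite_snapped.png")
```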
An AI SKILL that provides design intelligence for building professional UI/UX across multiple platforms
Python
⭐ 507
151
View All
on GitHub
Updated 2025-12-06T04:32:23.557226+00:00