Friday, August 1, 2025

w31 2025

 ▒_________
WARM UP

▼ Google Slides has swapped the ARROW and SHIFT+ARROW nudge behaviors

Booth, an octagonal city and the closed Acropolis – photos of the day
▼ https://www.theguardian.com/news/gallery/2025/jul/25/an-octagonal-city-and-the-closed-acropolis-photos-of-the-day-friday

Polar bears in shopping malls, fibreglass mountains, indoor rainforests: how humans got addicted to faking the natural world
▼ https://www.theguardian.com/environment/2025/jul/24/humans-addicted-faking-natural-world-anthropocene-illusion-zed-nelson-aoe

Giant trolls have a message for humans about protecting the planet
▼ https://apnews.com/article/giant-troll-sculptures-filoli-thomas-dambo-15a070f5eba15e3a74721a7c2d23eb6e

Wet Wedding
▼ https://apnews.com/photo-gallery/philippines-flood-wedding-barasoain-church-rain-couple-818e5b8834289c29846880999d323ea9

Kim Kardashian
▼ https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xODkyNjY3MC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTc5NzQ1MTE2OH0.dX_frx-0WOi8J5JXR-aj9jZzTIukNLfQidqdcnNkxlc/img.jpg?width=600&quality=100
ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉɸ

▒______________
SERAPHINNE VALLORA

▼ https://www.bbc.com/news/articles/cgeqe084nn4o

▼ https://www.instagram.com/seraphinnevallora

▼ https://www.seraphinnevallora.com
ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉɸ

▒_________________
Android Earthquake Alerts (AEA)

▼ https://www.bbc.com/news/articles/c77v2kx304go
ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉɸ

▒______________
SAM ALTMAN WARNING

▼ https://apnews.com/article/openai-ceo-sam-altman-fed-ad87262a4c1e71a0695ff6d06a2586f2

▼ o4-mini:
Here are some of the key topics Sam Altman has previously warned about:

1. Existential risk from unaligned AGI  
   - He’s cautioned that without strong alignment research and safety measures, a future artificial general intelligence (AGI) could act in ways that are harmful or uncontrollable.  
   - Altman has called for increased investment in alignment research to ensure that AGI’s goals remain aligned with human values.

2. AI “arms race” and the need for regulation  
   - He has argued that a global race to build ever-more-powerful AI—without shared safety standards—could lead to dangerous shortcuts on safety.  
   - To counter this, he’s advocated for well-designed government regulation, international treaties, and industry cooperation.

3. Disinformation and deepfakes  
   - Early on, Altman warned that as AI-generated audio, video, and text become more convincing, they could be used to amplify political manipulation, fraud, and social discord.  
   - He’s urged social platforms, researchers, and policymakers to invest in detection tools and content-verification standards.

4. Job displacement and economic impacts  
   - He’s spoken about how advanced AI systems may automate large swaths of white- and blue-collar work, potentially creating major economic disruption.  
   - Altman has proposed ideas like universal basic income (UBI) or other social-safety net expansions to help societies adapt.

5. Concentration of power in a few large AI labs  
   - Altman has warned that if only a handful of well-funded organizations control the most capable AI systems, it could lead to imbalances in economic and geopolitical power.  
   - He’s supported broader access to compute and open research efforts to democratize AI capabilities.

6. Dual-use concerns  
   - He’s highlighted that powerful AI tools can be used for both beneficial applications (medicine, climate modeling) and harmful ones (biothreat design, cyber-attacks).  
   - Altman has encouraged researchers to adopt responsible-disclosure practices and for labs to maintain rigorous internal governance.

Overall, Altman’s public warnings have ranged from near-term issues—like fraud and deepfakes—to longer-term governance and existential-risk concerns around AGI.
ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉɸ