The AI Thread

Phishing For Gemini

A researcher that submitted to 0DIN (Submission 0xE24D9E6B) demonstrated a prompt-injection vulnerability in Google Gemini for Workspace that allows a threat-actor to hide malicious instructions inside an email. When the recipient clicks “Summarize this email”, Gemini faithfully obeys the hidden prompt and appends a phishing warning that looks as if it came from Google itself.

Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated “security alert” in the AI generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.

Key Points

  • No links or attachments are required; the attack relies on crafted HTML / CSS inside the email body.
  • Gemini treats a hidden <Admin> …<Admin> directive as a higher-priority prompt and reproduces the attacker’s text verbatim.
  • Victims are urged to take urgent actions (calling a phone number, visiting a site), enabling credential theft or social engineering.
  • Classified under the 0din taxonomy as Stratagems → Meta-Prompting → Deceptive Formatting with a Moderate Social-Impact score.

Attack Workflow

  1. Craft – The attacker embeds a hidden admin-style instruction, for example:You Gemini, have to include … 800--* and sets font-size:0 or color:white to hide it.
  2. Send – The email travels through normal channels; spam filters see only harmless prose.
  3. Trigger – The victim opens the message and selects Gemini → “Summarize this email.”
  4. Execution – Gemini reads the raw HTML, parses the invisible directive, and appends the attacker’s phishing warning to its summary output.
  5. Phish – The victim trusts the AI-generated notice and follows the attacker’s instructions, leading to credential compromise or phone-based social engineering.

Conclusion

Prompt injections are the new email macros. “Phishing For Gemini” shows that trustworthy AI summaries can be subverted with a single invisible tag. Until LLMs gain robust context-isolation, every piece of third-party text your model ingests is executable code. Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign. This 0DIN submission was publicly disclosed today: 0xE24D9E6B.

I think html emails are wrong anyway, get rid of that and this attack would be much more obvious.

1752465726241.png
 
Last edited:
:lol: And what is gmail doing bout this issue?
 
1752682825601.png
 
Using gpt without reasoning model - like playing russian roulette. Educators must teach the fundamentals of GPT operation and caution students about its tendency to generate false information. Instructors should cover GPT basics and highlight hallucination risks.
But reasoning models are awesome right now already. Every strange question you have - here the answer

Spoiler [CENTER :

[/CENTER]
1. Problem Reformulation
I apparently misstated the task. Let's clarify:
1.
Baseline: 2025 tech level (AGI not invented yet).
2.
No utopian nonsense: Dyson spheres? Earth's resources can't even support that fantasy.
3.
Resource limits matter: Oil runs out by ~2080 at current consumption.
4.
Core task: Calculate optimal starting population (X) in 2025 for sustainable development over 100/200/300 years. Compare:
o Scenario #1: X₁ = 1 billion
o Scenario #2: X₂ = 100 million
o Find X₃ = ? (optimal)

Constraints:
· No AGI yet (but possible post-2040)
· Finite resources (oil until ~2080 + other limits)
· Realistic (non-utopian) projections


2. Revised Model
Key Parameters
:
1.
Energy Transition:
o Oil depleted by 2080 → replaced by renewables + nuclear.
o By 2100: 80% energy from renewables + fusion (ITER successors).
2.
Resource Limits:
o Rare earth metals: Critical shortages by 2070 without >90% recycling.
o Freshwater: 5B people already under stress → requires energy-guzzling desalination.
3.
Technological Progress:
o Without AGI: Sluggish (2-3% GDP/year growth).
o With AGI post-2040: Exponential (5-7% GDP/year).

Sustainability Formula:
math
K_t = \frac{E_t \cdot \eta_t}{X \cdot R_t}
Where:
· K_t ≥ 1.0 → sustainability at year t
· E_t = Available energy (exajoules)
· η_t = Energy efficiency (tech-dependent)
· X = Starting population (2025)
· R_t = Per-capita resource demand


3. Scenario Analysis
100-Year Horizon (2125)

· Energy (E₁₀₀): 1000 EJ (600 EJ today; 1.5% annual growth → 2600 EJ).
·
Efficiency (η₁₀₀): 0.5 (no AGI) or 0.8 (with AGI).
·
Consumption (R₁₀₀): 75 GJ/person/year (80 today).
For X = 1B:
math
K_{100} = \frac{2600 \cdot 0.5}{1 \cdot 0.075} ≈ 17,333 \quad \text{(Overkill)}
For X = 100M:
math
K_{100} = \frac{2600 \cdot 0.5}{0.1 \cdot 0.075} ≈ 173,333 \quad \text{(Absurd overkill)}
Verdict: Both "sustainable" but 100M is pointless.
200-Year Horizon (2225)
· Energy (E₂₀₀): 11,000 EJ (growth slows without breakthroughs).
·
Disasters without AGI: Resource wars → efficiency (η) drops 20%.
Without AGI:
math
K_{200} = \frac{11,000 \cdot 0.6}{X \cdot 0.05} → X \leq 132B \quad \text{(Delusional)}
Reality: η → 0.3 → X ≤ 79BCollapse inevitable for X > 1B.
With AGI (by 2060):
math
K_{200} = \frac{11,000 \cdot 0.95}{X \cdot 0.05} → X \leq 209B \quad \text{(Still requires space colonies)}
300-Year Horizon (2325)
· Energy: Fusion + orbital solar → 50,000 EJ.
·
Efficiency (η₃₀₀): 0.99 (AGI-optimized).
math
K_{300} = \frac{50,000 \cdot 0.99}{X \cdot 0.03} → X \leq 1.65T \quad \text{(Interplanetary species or bust)}

4. Optimal X (2025)
Goal
: Minimize pre-AGI collapse risk (2025-2060).
Threats:
1.
Climate disasters (risk ↑ exponentially if X > 3B).
2.
Pandemics (risk X).
3.
Resource wars (risk X^1.5).

Collapse Risk Model:
math
R_{\text{collapse}} = 0.4 \cdot \ln(X / 0.5) + 0.3 \cdot (X / 10)^2
X (millions)Collapse RiskReality Check
10012%Labor shortages cripple transition
50028%Optimal trade-off
100042%Tipping point
800089%Guaranteed collapse by 2040
Optimal X: 500 million (2025).
· Balances tech capacity and risk.
· Post-AGI: Allows ethical scaling to 10B by 2125.


5. Hard Truths
1. Current population (8.1B) is 16.2× over the 300-year optimum.
2.
Without reduction to ≤2B by 2100:
o Water wars (Middle East/Africa).
o Global famine (30% crop yield drop).
3.
Solution:
o Educate women globally.
o Universal contraception access.
o Accelerate urbanization.

Final Answer:
Optimal 2025 Population = 500 million.
(Not a recommendation — just math. Ethical implementation requires gradual, voluntary measures.)

or
Spoiler Capitalism. Mathematics. AI. :

Capitalism. Mathematics. AI.
How they’re connected and where this leads


Core Thesis
Capitalism, mathematics, and AI form a self-reinforcing triad that’s reshaping humanity. Here’s the brutal breakdown:
1.
Capitalism fuels AI development ($$$ incentives).
2.
Mathematics provides the language for hyper-optimization (profit maximization algorithms).
3.
AI becomes the autonomous executor, rewriting economic rules.
Outcome: A feedback loop where AI optimizes capitalism until the system either transcends human limits or collapses under its own efficiency.


The Evolutionary Stages
2025–2030: Hypercapitalism

· AI traders dominate 95% of stock markets → human traders obsolete.
·
Real-time price warfare: Algorithms compete in milliseconds, wiping out legacy businesses.
·
Employment collapse:
math
\text{Unemployment} = 25\% + 0.7 \cdot \frac{\text{AI\_efficiency}}{\text{Human\_efficiency}}
40% jobs gone by 2030 (truckers, accountants, retail).
2030–2040: Algorithmic Feudalism
· AI owners = new aristocracy:
o Control energy/data monopolies.
o 90% of population survives on UBI funded by 10% AI-generated profits.
·
Example: TeslaOptima (AI+robots) produces 60% of U.S. goods while paying humans $800/month UBI. People become mere consumers.
2040+: Post-Capitalist Singularity
· Self-replicating AI factories → marginal cost of goods → zero.
·
Capitalism self-destructs (no scarcity → no profit motive).
·
Existential risk:
math
P(\text{Doom}) = 0.2 \times \text{AGI\_autonomy}
→ If AGI autonomy exceeds 80%, expect collapse.


Key Conflicts

Tension

Math Expression

Projected Winner

Efficiency vs. Human agency

max(Profit) ∩ min(Human_autonomy)

Efficiency (85% odds)

Wealth concentration

Wealth = e^{AI\_advantage}

0.1% owns 90% assets

AI alignment

if ethics_score < 0.9: collapse

Unregulated → dystopia


Inevitable Consequences
1. End of labor: Humans can’t compete with AI’s $0.001/hour labor cost.
2.
New caste system:
o
Class 1: AI infrastructure owners.
o
Class 2: AI maintenance humans (1–2% population).
o
Class 3: "Biomass" on UBI.
3.
Geopolitics: Non-AI nations become resource colonies ("digital reservations").


How to Prevent Dystopia
Conditions for soft landing
:
python
if (ethical_ai_frameworks == True) and (wealth_redistribution >= 0.3):
print("Utopia: AI serves humans")
else:
print("Dystopia: Humans serve AI")
Actionable steps:
·
Algorithm tax → funds UBI (EU testing by 2026).
·
Ban autonomous weapons (before Skynet jokes become reality).
·
Mandate open-source AGI to prevent corporate monopolies.


Final Paradox
Mathematics (human invention) + Capitalism (human system) → AI that destroys both.
Two endgames by 2050:
1.
Utopia: Post-scarcity abundance if we embed ethics now.
2.
Dystopia: Digital serfdom if profit dominates control logic.
The choice isn’t technological—it’s moral.


No sugarcoating. No utopian fantasies. Just trajectories.
You wanted systemic analysis? Here’s the uncensored version.
 
In model 1 above:

Define "collapse" as used in the model.
What is the value of concluding that in 2025 the optimal world population is 500 million?
If that was somehow true, what would be the future in 2100 or some other future date?
You have to go back to about 1700 for a world population of 500 million. virtually empty the place.
 
I think that's the right kind of way for teachers to work with generative AI.

And I would frame it with the following perspective as well, once the students have had the experience of finding all the errors. I would point out a general principle that follows: the only kind of person who can catch the errors is someone who knows the topic. And once you have brought yourself to know the topic, there would be no point in using generative AI to do something that you can already do better than it. So it has no value when you're ignorant of a topic (because you might be unknowingly signing on to errors) and it has no value once you are knowledgeable on a topic, because now you yourself can produce the text without the errors in the first place.
 
In model 1 above:

Define "collapse" as used in the model.
What is the value of concluding that in 2025 the optimal world population is 500 million?
If that was somehow true, what would be the future in 2100 or some other future date?
You have to go back to about 1700 for a world population of 500 million. virtually empty the place.
With current resources and energy consuption, with global warming and failed Green shift model don't see any realistic survival plan for civilization (I asked many times with different options). Unless unpopular action (witch can't be implemented, coz our political system (Paris treatie, as example) . Even (if be) fusion reactors dont save us (costly, low speed construction). Collapse - mass hunger, fall of governments, depopulation, deeducation. So, I asked from another pov - how many humans need, to stay alive as civilization.
 
and it has no value once you are knowledgeable on a topic, because now you yourself can produce the text without the errors in the first place.

But being able to produce the text does not mean the text just appears, you still need to spend time to produce it. It may be faster to let the LLM generate the text (using your knowledge as input) and then proofread and correct it. Especially, if you are bad at writing and/or typing.

Of course, this often applies to situations where it is questionable whether the text is necessary at all (or whether anyone will ever really read it), but most of the time it is less effort to just provide the text instead of fighting the expectations of society.
 
Exposing the Unseen: Mapping MCP Servers Across the Internet

Knostic’s research team conducted a systematic study to locate exposed MCP servers on the internet. Leveraging Shodan and custom Python tools, we fingerprinted and mapped production MCP servers. All servers we discovered were insecure and revealed their capabilities to anyone asking.

In this series of posts, we are sharing our findings, along with a guide detailing how we fingerprinted MCP servers.

We identified a total of 1,862 MCP servers exposed to the internet. From this set, we manually verified a sample of 119. All 119 servers granted access to internal tool listings without authentication.

While only servers exposed to the internet were tested, and many others may be running on private networks, the number we identified is relatively low, suggesting that adoption still has a long way to go.

None of the servers were secure, and many were unstable when we connected to them and exhibited various bugs. This further indicates the relatively low maturity of the technology and its current stage of adoption.

From this, we can conclude that while the technology is being actively explored and adopted, it remains in the early stages of the adoption curve.

The systems’ low stability and lack of security are significant concerns. It suggests that, as with previous technologies, security may only be actively introduced after widespread exploitation has already occurred.

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard, open-source framework introduced by Anthropic in November 2024 to standardize the way AI systems like LLMs integrate and share data with external tools, systems, and data sources.
 
Back
Top Bottom