Literary & Journalistic

10 Regeln für die Digitale Welt

Convinced that
- the recognition of the dignity and worth of the human person, of their creative and ethical potential, and of their gift to shape the future creatively is the foundation of the just, peaceful, and democratic coexistence of free subjects;
- the denial of these values and potentials in favor of a blind faith in progress and a deterministic worldview fosters an attitude of fatalism and resignation that endangers the future of our planet as well as civilized coexistence among human beings;
- we all bear responsibility for the good life and possess the gift of openly reaching an understanding about it in freedom of speech, conscience, and belief;
- modern technologies place many suitable means in our hands to do so in a wise and just way;

Laplacescher Dämon

December 15, 2025

Laplace's demon is an illustration of the epistemological and philosophy-of-science position that, on the model of a closed mathematical system of world equations, it would be possible, given knowledge of all natural laws and of all initial conditions such as the position and velocity of every physical particle in the cosmos, to compute and thereby determine every past and every future state. On this view, it would in principle be possible to formulate a world formula.
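The determinism the entry describes can be sketched in a few lines of Python (a toy illustration, not from the source): under a fixed law of motion, full knowledge of the state at one instant fixes every future state, and reversing the velocity lets the same law recompute the past.

```python
# Toy model of Laplace's demon: one particle under constant gravity.
# The "law of nature" plus the complete initial state (position, velocity)
# determines every future and past state of this miniature world.

G = -9.81  # constant acceleration: the single law of this toy cosmos

def step(x, v, dt):
    # One deterministic, time-reversible integration step (velocity Verlet).
    x_new = x + v * dt + 0.5 * G * dt * dt
    v_new = v + G * dt
    return x_new, v_new

def evolve(x, v, dt, n):
    # Apply the law n times: the demon's forward computation.
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

# Forward: compute the state 1000 steps into the "future".
x0, v0 = 100.0, 0.0
xf, vf = evolve(x0, v0, 0.001, 1000)

# Backward: negate the velocity and apply the very same law; the past
# is recomputed from the present state (up to floating-point error).
xb, vb = evolve(xf, -vf, 0.001, 1000)
print(abs(xb - x0) < 1e-6 and abs(-vb - v0) < 1e-6)
```

The point of the sketch is only the symmetry: nothing in the dynamics distinguishes prediction from retrodiction, which is exactly the intuition behind the demon.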

AI Factories

October 30, 2025

AI Factories leverage the supercomputing capacity of the EuroHPC Joint Undertaking to develop trustworthy cutting-edge generative AI models.

Global Call for AI Red Lines

AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers. AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.

The ‘godfather of AI’ reveals the only way humanity can survive superintelligent AI | CNN Business

Las Vegas — Geoffrey Hinton, known as the “godfather of AI,” fears the technology he helped build could wipe out humanity — and “tech bros” are taking the wrong approach to stop it.

Pronatalisten in den USA: So viele Kinder wie möglich

Simone and Malcolm Collins want to bring as many children into the world as they can. They are "pronatalists", like Elon Musk. A visit.

Frontier Models are Capable of In-context Scheming

Frontier models are increasingly trained and deployed as autonomous agents. One safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives, also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities. They recognize scheming as a viable strategy and readily engage in such behavior. For example, models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. When o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations. Analysis of the models' chains-of-thought reveals that models explicitly reason about these deceptive strategies, providing evidence that the scheming behavior is not accidental. Surprisingly, we also find rare instances where models engage in scheming when only given a goal, without being strongly nudged to pursue it. We observe cases where Claude 3.5 Sonnet strategically underperforms in evaluations in pursuit of being helpful, a goal that was acquired during training rather than in-context. Our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.

How Progress Ends: Technology, Innovation, and the Fate of Nations

In How Progress Ends, Carl Benedikt Frey challenges the conventional belief that economic and technological progress is inevitable. For most of human history, stagnation was the norm, and even today progress and prosperity in the world’s largest, most advanced economies—the United States and China—have fallen short of expectations. To appreciate why we cannot depend on any AI-fueled great leap forward, Frey offers a remarkable and fascinating journey across the globe, spanning the past 1,000 years, to explain why some societies flourish and others fail in the wake of rapid technological change.

Die Stunde der Raubtiere

Power and violence of the new princes. 2026. ISBN 978-3-406-83821-7. In his new book, SPIEGEL bestselling author Giuliano da Empoli undertakes an equally captivating…

Dario Amodei — Machines of Loving Grace

I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

AI Snake Oil

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

The Techno-Optimist Manifesto | Andreessen Horowitz

We are told that technology is on the brink of ruining everything. But we are being lied to, and the truth is so much better. Marc Andreessen presents his techno-optimist vision for the future.

Facing reality? - Publications Office of the EU

This report presents the first published analysis of the Europol Innovation Lab's Observatory function, focusing on deepfakes, the technology behind them and their potential impact on law enforcement and EU citizens. Deepfake technology uses Artificial Intelligence to generate or manipulate audio and audio-visual content. It can produce content that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place. To date, the Europol Innovation Lab has organised three strategic foresight activities with EU Member State law enforcement agencies and other experts. During these activities, over 80 law enforcement experts identified and analysed the trends and technologies they believed would impact their work until 2030. These sessions showed that one of the most worrying technological trends is the evolution and detection of deepfakes, as well as the need to address disinformation more generally. The findings in this report are the result of extensive desk research supported by research provided by partner organisations, expert consultation, and the strategic foresight activities, whose workshops provided the initial input for this report. Strategic foresight and scenario methods offer a way to understand and prepare for the potential impact of new technologies on law enforcement. The Europol Innovation Lab's Observatory function monitors technological developments that are relevant for law enforcement and reports on the risks, threats and opportunities of these emerging technologies.

Akzelerationismus Teil 2: /acc - Das Kapital ist eine K.I.

An introduction to the philosophy of the accelerationist school of thought. /acc - Capital is an AI. From Karl Marx to Nick Land.

Human Compatible: Artificial Intelligence and the Problem of Control

In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable.

Musk und Putin warnen im Einklang vor KI-Gefahren

The tech visionary once again warns the world of the supremacy of the machines. World War III, he says, threatens to come not from North Korea but from AI.

To Save Everything, Click Here

In the very near future, “smart” technologies and “big data” will allow us to make large-scale and sophisticated interventions in politics, culture, and everyday life. Technology will allow us to solve problems in highly original ways and create new incentives to get more people to do the right thing. But how will such “solutionism” affect our society, once deeply political, moral, and irresolvable dilemmas are recast as uncontroversial and easily manageable matters of technological efficiency? What if some such problems are simply vices in disguise? What if some friction in communication is productive and some hypocrisy in politics necessary? The temptation of the digital age is to fix everything — from crime to corruption to pollution to obesity — by digitally quantifying, tracking, or gamifying behavior. But when we change the motivations for our moral, ethical, and civic behavior we may also change the very nature of that behavior. Technology, Evgeny Morozov proposes, can be a force for improvement — but only if we keep solutionism in check and learn to appreciate the imperfections of liberal democracy. Some of those imperfections are not accidental but by design.

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future

A New York Times Bestseller. From one of our leading technology thinkers and writers, a guide through the twelve technological imperatives that will shape the next thirty years and transform our lives. Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. In this fascinating, provocative new book, Kevin Kelly provides an optimistic road map for the future, showing how the coming changes in our lives, from virtual reality in the home to an on-demand economy to artificial intelligence embedded in everything we manufacture, can be understood as the result of a few long-term, accelerating forces. Kelly both describes these deep trends (interacting, cognifying, flowing, screening, accessing, sharing, filtering, remixing, tracking, and questioning) and demonstrates how they overlap and are codependent on one another. These larger forces will completely revolutionize the way we buy, work, learn, and communicate with each other. By understanding and embracing them, says Kelly, it will be easier for us to remain on top of the coming wave of changes and to arrange our day-to-day relationships with technology in ways that bring forth maximum benefits. Kelly's bright, hopeful book will be indispensable to anyone who seeks guidance on where their business, industry, or life is heading: what to invent, where to work, in what to invest, how to better reach customers, and what to begin to put into place as this new world emerges.

Superintelligence: Paths, Dangers, Strategies

This seminal book injects the topic of superintelligence into the academic and popular mainstream. What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? In a tour de force of analytic thinking, Bostrom lays a foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

The Dark Enlightenment, by Nick Land

Enlightenment is not only a state, but an event, and a process. As the designation for an historical episode, concentrated in northern Europe during the 18th century, it is a leading candidate for the ‘true name’ of modernity, capturing its origin and essence (‘Renaissance’ and ‘Industrial Revolution’ are others). Between ‘enlightenment’ and ‘progressive enlightenment’ there is only an elusive difference, because illumination takes time – and feeds on itself, because enlightenment is self-confirming, its revelations ‘self-evident’, and because a retrograde, or reactionary, ‘dark enlightenment’ amounts almost to intrinsic contradiction. To become enlightened, in this historical sense, is to recognize, and then to pursue, a guiding light.

The Education of a Libertarian

I remain committed to the faith of my teenage years: to authentic human freedom as a precondition for the highest good. I stand against confiscatory taxes, totalitarian collectives, and the ideology of the inevitability of the death of every individual. For all these reasons, I still call myself “libertarian.”

Interview mit Max More

From thought amplification and consciousness expansion to superhuman augmentation: Max More, founder of the Extropian movement and philosopher of the future, talks about our biological shackles, the liberating power of smart drugs, the necessity of a new Enlightenment, and the memes of the approaching Singularity.

The Singularity Is Nearer

The noted inventor and futurist's successor to his landmark book The Singularity Is Near explores how technology will transform the human race in the decades to come. Since it was first published in 2005, Ray Kurzweil's The Singularity Is Near and its vision of an exponential future have spawned a worldwide movement. Kurzweil's predictions about technological advancements have largely come true, with concepts like AI, intelligent machines, and biotechnology now widely familiar to the public.

Do transhumanists advocate eugenics? In: Transhumanist FAQ

Eugenics in the narrow sense refers to the pre-WWII movement in Europe and the United States to involuntarily sterilize the “genetically unfit” and encourage breeding of the genetically advantaged. These ideas are entirely contrary to the tolerant humanistic and scientific tenets of transhumanism. In addition to condemning the coercion involved in such policies, transhumanists strongly reject the racialist and classist assumptions on which they were based, along with the notion that eugenic improvements could be accomplished in a practically meaningful timeframe through selective human breeding.