In a 2012 paper, the Russian transhumanist Alexey Turchin described what he called the “global catastrophic risks of finding an extraterrestrial AI message” during the search for intelligent life. The scenario unfolds much like the plot of A for Andromeda. An alien civilization creates a beacon in space of clearly non-natural origin that draws our attention. A nearby radio transmitter then sends a message containing instructions for building an impossibly advanced computer that would run an alien AI.
The result is a phishing attempt on a cosmic scale. Just like a malware attack that takes over a user’s computer, the advanced alien AI could quickly take over the Earth’s infrastructure — and us with it. (Others in the broader existential risk community have raised similar concerns that hostile aliens could target us with malicious information.)
What can we do to protect ourselves? Well, we could simply choose not to build the alien computer. But Turchin assumes that the message would also contain “bait” in the form of promises that the computer could, for example, solve our biggest existential challenges or provide unlimited power to those who control it.
Geopolitics would play a role as well. Just as international competition has led nations in the past to embrace dangerous technologies — like nuclear weapons — out of fear that their adversaries would do so first, the same could happen in the event of a message from space. How confident would policymakers in Washington be that China would handle such a signal safely if it received one first — or vice versa?