Channel: Press to Digitate

Cybersecurity vs. Human Extinction


The recent White House Cyberspace Policy Review, which charts a path for the Administration's high-level cybersecurity initiative, incorporated input from all the normal constituencies and stakeholders in the conventional interpretation of the issue.

Unfortunately, the most serious and probable cybersecurity catastrophes fall outside the range of the normal and conventionally expected threats. It is their very 'bizarreness' which keeps them from being taken seriously, no matter how much evidence accumulates that they are on the verge of becoming inevitable.

The following is an email message which this writer has sent through Whitehouse.gov to suggest the Administration take cognizance of such issues. Though it will probably get lost in the avalanche of incoming White House email, perhaps public discussion may raise the profile of these issues while there is still time to debate them.

ATTN: Melissa Hathaway, Cybersecurity Chief at the National Security Council

TO MS. HATHAWAY:

The Cyberspace Policy Review neglects to consider the range of threats from the impending instantiation of Artificial [General] Intelligence ("AGI"), and the emerging proliferation of practical Brain/Computer Interface ("BCI") technology in mass-market consumer electronics devices [such as the $300 Emotiv EPOC].

These developments are no longer speculative, and not merely inevitable, but already happening in realtime; they represent a far greater danger to human life than the sum total of all other cybersecurity threats combined.

The rapid - and exponentially accelerating - development of AGI and BCI technologies will lead to a cognitive convergence of Man and Machine, probably before the end of this Administration's term in office.  Technologies enabling full-scale human brain emulation in silicon, which Kurzweil predicted in 2005 would arrive by 2048, were believed by technology insiders - as of last October - to be likely available by 2018, and perhaps even sooner.

Meanwhile, this year, IBM announced that it had broken the petaflop barrier, and, within weeks, a second high performance computing initiative demonstrated performance in excess of 1.7 pflops.  This places extant hardware within a single Moore Doubling of 3.5 pflops, the assumed processing power required for realtime emulation of the complete human brain.
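The back-of-envelope arithmetic behind "a single Moore Doubling" can be sketched as follows. Note that the 3.5 pflops brain-emulation figure is the assumption stated above, not an established constant, and the 18-24 month doubling period is the commonly cited Moore's Law range:

```python
import math

# Extant demonstrated performance vs. the assumed brain-emulation requirement.
extant_pflops = 1.7
brain_emulation_pflops = 3.5  # assumed requirement, per the text above

# Number of performance doublings needed to close the gap.
doublings_needed = math.log2(brain_emulation_pflops / extant_pflops)
print(f"Doublings needed: {doublings_needed:.2f}")  # ≈ 1.04

# Assuming a historical doubling period of roughly 18-24 months:
months_low = doublings_needed * 18
months_high = doublings_needed * 24
print(f"Time to threshold: {months_low:.0f}-{months_high:.0f} months")
```

Since log2(3.5 / 1.7) is just over 1, the 1.7 pflops figure is indeed "within a single Moore Doubling" of the assumed 3.5 pflops threshold, i.e. roughly a year and a half to two years away on the historical trend.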

At the same time, the U.S. Army is currently testing an "artificial soldier" AGI avatar online, within the 'World of Warcraft' MMORPG virtual reality environment, to determine whether real human players on the Internet can detect the avatar as a computer-generated intelligence, rather than another real human player.  This is a legitimate and practical Turing Test, which, if passed by the Army's avatar, will mean that we have already arrived at human-scale AGI.  It seems unlikely that the Army would publicly announce such an experiment if it were not highly confident that its AI package would prove successful at the endeavor.

The consequences of greater-than-human scale AGI have been under consideration within the cybernetics research community since the landmark 1993 presentation to NASA by mathematician Vernor Vinge, which first characterized the technological Singularity.  Such consequences are believed by many knowledgeable cyberneticists directly involved in leading edge AGI research to include a definite possibility of human extinction.

The non-zero (and increasing) probability of superhuman AGI entering an uncontrolled - and, likely, uncontrollable - state, coupled with the near-certain development of full-duplex, high resolution consumer-level BCI peripherals within two or three Moore Doublings, creates dangers hitherto discussed only in science fiction, but which now present themselves for urgent and serious policy consideration.

Ironically, the government's own assets are driving this convergence, through cybernetics and neuroscience research being conducted by DARPA, IARPA, NSF, DoE, U.S. Army, NASA and the NIH.  The NSC should empower an independent comprehensive analysis of these program activities, should freeze such research until it can be assessed in an independent and integrated manner, and should incorporate AGI and BCI threat conditions within the scope of cybersecurity issues under the purview of the new office being created.

Since not all AGI/BCI research is being conducted in the U.S., cybersecurity threat planning must include the possibility that a "Strong AI" machine intelligence may instantiate within research or public infrastructure not under direct, or even indirect, control or influence by the U.S. federal government.  Should this occur, the potential for its uncontrolled propagation past the point of any possible human control or management must be considered 'High'.  This would pose an immediate and likely unrecoverable national security threat to the United States of the gravest nature, and therefore must be dealt with in advance, rather than after the fact, at which point no effectual response may be possible.

In addition to the aforementioned freeze and independent review of the USG's own contribution toward this threat, it is proposed that a number of universities [which are NOT currently engaged in federally sponsored cybernetics or neuroscience research] be contracted at a significant level to independently discover and characterize all presently unknown aspects of the AGI/BCI threat environment. Implications include potential impacts on Public Health, Continuity of Government, Nuclear Security, Aerospace Defense, Biometric Access Control, and myriad issues not yet imagined.  

Most important: The FDA should issue an advance directive, prohibiting the use of 'full-duplex' BCI under any circumstances, until such analyses can be performed and evaluated.

************************************ End of Msg

The real cybersecurity threat isn't hackers from the Chinese People's Liberation Army burrowing into Lockheed, or al-Qaeda in our utilities grid, or some creep phishing for social security records or credit card numbers.  The real threat is that the leading edge in cybernetics will soon get away from us. We'll be lucky if we even recognize it when it happens.  If we wait until then, it will be too late to do anything meaningful about it.

Last month's box office clash of "Star Trek" and "Terminator Salvation" is an ironic metaphor for the two radically different and directly competing futures we must now decide between. If our society fails to actively make the decision, the wrong future will certainly come about by default. Even now, it is questionable whether we are still in time to affect the outcome, no matter what government, industry, and academia may cooperate to do on the subject.

The sheer inevitability of the AGI/BCI convergence - not "if", but "when" - should put it at the top of the cybersecurity agenda, relative to all other threats which are merely at some level of probability. Whether the Singularity happens in four years or 40 years, the very essence of 'what it means to be human', and the continued survival of the species itself are at stake.

