Since HAL 9000 first said “I’m sorry, Dave” in Arthur C. Clarke’s classic sci-fi novel 2001: A Space Odyssey, the fear of super-intelligent machines turning on their human creators has captivated audiences, novelists and Hollywood alike.
However, according to Nicole Kobie at PC Pro, with recent advances in artificial intelligence and the rapid growth of processing power available in the cloud, real-life scientists have begun turning their attention to how a real-life HAL might be controlled:
Technology has long been a source of danger in the fertile imaginations of sci-fi novelists, but the idea is gaining academic support, with researchers at the University of Oxford’s Future of Humanity Institute (FHI) joining those from the newly launched Centre for the Study of Existential Risk (CSER) at the University of Cambridge to look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations.
The problem of containing a super-intelligent artificial intelligence (AI) gone haywire has attracted some of the world’s brightest tech brains:
This idea has been studied since 2005 by the FHI, which was last year joined by the CSER, founded by Huw Price, Bertrand Russell professor of philosophy at the University of Cambridge, astronomer royal Lord Martin Rees, and Jaan Tallinn, co-founder of Skype.
The institute was sparked in part by a conversation between Price and Tallinn, during which the latter wondered, “in his pessimistic moments”, if he’s “more likely to die from an AI accident than from cancer or heart disease”.
Today, the idea of malevolent computers being more dangerous than the world’s deadliest diseases might seem like a paranoid sci-fi fantasy. However, some scientists believe we are rapidly approaching a point at which we develop machines that are powerful enough to devise and design even more powerful devices without human intervention – with potentially unpredictable results:
At the core of this is an idea commonly referred to as “singularity” – the point at which technology can start to make its own technology and become more advanced than us, making it impossible to predict what comes next.
A post-singularity world might be more difficult to imagine than we think. As FHI researcher Nick Bostrom points out, malevolent super-intelligent machines are unlikely to think or act like us:
“Our intuitions have been shaped by [science fiction], where machines are anthropomorphised, and they’re really just like human supervillains,” he says. “That makes it harder to think about this in a clear way.”
As he explains to Aeon, technology such as artificial intelligence is best thought of as a “primordial force of nature, like a star system or a hurricane – something strong but indifferent”.
It seems dealing with a real-life HAL could be far more difficult than we initially anticipated:
A patently difficult problem
There have been calls recently to reform the US patent system, with many saying it has failed to keep pace with online and smartphone innovation.
There are the smartphone patent wars, where tech giants such as Apple, Samsung, Google and Microsoft are attempting to use patent infringement lawsuits to outlaw the sale of their competitors’ products. In the most notorious case, Apple was initially awarded $US1 billion in damages in a lawsuit against Samsung in August of last year.
Then there are the “patent trolls” – businesses with no real assets or products, which purchase patent portfolios in order to file lawsuits in the hope of securing royalties from companies that do make products.
According to Mark Bohannon of Opensource.com, the issue of patent reform has attracted the attention of the highest office in US politics:
Earlier this month, the White House reiterated its concerns that there has been “an explosion of abusive patent litigation designed not to reward innovation and enforce intellectual property, but to threaten companies in order to extract settlements based on questionable claims.” It released a report, Patent Assertion and U.S. Innovation, detailing the challenges that PAEs [patent assertion entities] and a broken patent system pose to innovators. The White House also announced a set of legislative priorities and executive branch actions on this front.
In the article, Bohannon examines four potential pieces of legislation dealing with the issue and compares how well their provisions address some of the key problems businesses have identified. One such area is the asymmetry of costs and risks between companies filing patent lawsuits and those defending them:
The asymmetrical costs and risks of litigation are disproportionately and unfairly borne by defendants targeted by [Patent Assertion Entities or] PAEs, who, unlike PAEs, are creating jobs, engaging in innovation, and contributing to the economy.
Despite the important steps taken in recent legislation, however, there is still some way to go:
A number of key issues “left on the cutting room floor” during consideration of the AIA [America Invents Act]—including the current unreliable, uncertain, and speculative method of calculating damages, correcting the standard for finding willful infringement, and venue—remain important elements of our broken patent system that play to the hands of PAEs and encourage abusive patent litigation.
Admittedly, the subject matter is a little dry. However, if you work in science or technology, patents have the potential to severely impact your business. And given Australian intellectual property law tends to follow that of the US, it’s worth reading Bohannon’s piece to get up to speed.
The security risks of HTML5
HTML5 is shaping up to be the next big thing in web and mobile development. It’s a technology that also forms the basis of Mozilla’s forthcoming Firefox OS smartphone platform.
This new technology will help developers create platform-independent websites that behave like apps. But as Ericka Chickowski of Dark Reading asks, just how secure is it?
Designed to help developers more closely mimic native applications through browser-based apps, HTML5 includes a number of useful features that pose as double-edged swords from a security perspective.
In traditional HTML websites, most of the data viewed in web browsers is stored and processed at the server end, with only a minimal amount of data stored on the end user’s computer. This fundamentally changes with HTML5:
Local storage is a big change from HTML of the past, where browsers could only use cookies to store small bits of information, such as session tokens, for managing identity.
“HTML5 changes this with sessionStorage, localStorage, and client-side databases to allow developers to store vast amounts of data in the browser that is all accessible from JavaScript,” says Dan Kuykendall, CTO of web application security firm NT OBJECTives, who explains that while this provides the opportunity for feature-rich applications and greater offline capabilities, it also opens up a new field of opportunity to attackers.
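To make that concrete, here is a minimal sketch (written in TypeScript, which compiles to plain JavaScript for the browser) of the Web Storage API Kuykendall is describing. The key names and values are illustrative assumptions, not examples drawn from the article:

```typescript
// Minimal sketch of the HTML5 Web Storage API described above.
// Runs in any modern browser; the keys and values are illustrative only.

// localStorage persists across browser sessions; sessionStorage is cleared
// when the tab closes. Both are plain key/value stores scoped to the origin.
localStorage.setItem("offlineDrafts", JSON.stringify([{ id: 1, body: "draft text" }]));
sessionStorage.setItem("uiTheme", "dark");

// Anything stored this way is readable by any JavaScript running on the
// page - including a script injected via a cross-site scripting (XSS) flaw.
const drafts = localStorage.getItem("offlineDrafts");
console.log(drafts); // an attacker's injected script could do exactly this
```

Unlike cookies flagged HttpOnly, there is no server-only mode for these stores: whatever goes in is, by design, exposed to client-side script.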
As Chickowski argues, this and other changes in HTML5 will require web developers to fundamentally rethink how they go about their work, from a security perspective:
Developers have to design with the dangers in mind and weigh that against the type and sensitivity of data stored in the client. At the moment, many development shops are not training their staff to do that.
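Purely as an illustration of that advice, here is one hedged sketch of what “designing with the dangers in mind” might look like; the whitelist, the helper name storeClientSide and the example keys are assumptions, not something taken from Chickowski’s piece:

```typescript
// One possible defensive pattern (illustrative only): decide up front which
// data is harmless enough to keep in the browser, and refuse everything else.

const CLIENT_SAFE_KEYS = new Set(["uiTheme", "language", "lastVisitedPage"]);

function storeClientSide(key: string, value: string): void {
  if (!CLIENT_SAFE_KEYS.has(key)) {
    // Session tokens, personal details and other sensitive values should
    // stay server-side (e.g. in HttpOnly cookies), out of reach of scripts.
    throw new Error(`Refusing to store sensitive key "${key}" in the browser`);
  }
  localStorage.setItem(key, value);
}

storeClientSide("uiTheme", "dark");          // fine: a harmless preference
// storeClientSide("authToken", "abc123");   // would throw: too sensitive
```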
As HTML5 gains traction, it’s critical to be aware of the risks it poses as well as its benefits.