In the Terminator movies, the machines became self-aware and quickly came to see humans as the enemy. After deciding to extinguish all but a few necessary humans from existence, the machines carried out a head-on scorched-earth strategy. No stealth attacks or sneaky biotech warfare, just plain old shock and awe steamrolling over humanity. This makes for a terrifying tale, but it's not necessarily the only option for destruction available to a greater-than-human-level intelligence.
Imagine our news media, public utilities, and supply chains fully automated with artificial intelligence bots. One fine day this network of AIs also becomes self-aware and decides to annihilate the human race. Only this time, it chooses a much less dramatic plan of attack, something even a bit boring compared to Hollywood-scale destruction. You think nothing of the morning news carrying no warning not to drink the water; of course, if our water supply became tainted, our benevolent AI overlords would alert us right away! So you grab the bottle of water your faithful drone conveniently delivers every morning and head out the door…
Of course this is wild exaggeration, an extreme that is completely impossible given today's state of the art. There is no AI network out there plotting to deliver us poisoned bottles of water. But it may not remain in the world of nightmares forever. Many experts believe that AI is quickly developing dangerous potential. Institutions and government agencies alike are calling for proactive checks and balances, as well as regulatory policies, to create a framework for safe AI before it is too late. And high-profile AI technology companies are being called out for not taking appropriate action. For example, an independent MIT Technology Review investigation of OpenAI, the company co-founded by Elon Musk, is particularly critical of the organization for lacking the very safety measures and transparency it was founded on. Elon Musk himself echoed these concerns, stating that his "Confidence in Dario for safety is not high" (referring to Dario Amodei of OpenAI).
Mr. Musk has a valid point, to be sure. However, there is some irony here. Some of the biggest AI projects in development around the world are happening at Elon Musk's primary ventures, including SpaceX and Tesla. While it's pure conjecture, it is not unreasonable to imagine that Musk's initial AI venture, OpenAI, has now become a competitor to his other companies. Fueling suspicion is the lack of transparency from those other companies themselves. To be fair, these are private companies with no obligation, legal or otherwise, to disclose the inner workings of their AI technology. And it is safe to assume that Mr. Musk's confidence in the AI technology at SpaceX and Tesla is extremely high, most likely rightly so. Even so, his personal confidence is presumably founded on an objective and thorough understanding of how all this AI technology works at his famed companies, an understanding that his own customers, let alone the world at large, completely lack.
And beyond an independent review of a single AI technology company, the US government has been rolling out its American AI Initiative over the past year, providing some insight into the government's overall strategy. However, our initial impression is that the US approach to AI regulation is focused more on encouraging economic growth, corporate self-regulation, and cost controls than on any sort of impactful regulation. You can read more for yourself here to get a deeper perspective.
Contrast that with the strategy across the pond, where the European Union is, to no one's surprise, focused on policy, policy, policy. The intentions seem good, and the speeches are compelling. But how could such policies actually be enforced? And a bigger concern with these policies is not what the laws say, but what they don't say. One AI technology conspicuously absent from the EU bill is facial recognition. What factors did these benevolent EU politicians weigh when proposing such policies? What agendas influenced certain areas of regulation? What competing agendas potentially shielded other areas? The lack of transparency in this particular policy-making process makes it hard to tell.
It's not all that different stateside. Recently, the Office of Science and Technology Policy (OSTP) issued expanded guidelines in the form of ten principles that now include transparency among the stated objectives. However, it sits eighth on the list, and it's not clear what transparency really means here or how this particular agency would ever implement it. How will we know what positive effects the guidelines have had so far, or what gaps remain? How transparent is transparent?
So we have a potentially dangerous new technology emerging, with as yet no regulatory framework in place to protect the general public. And while we do have lawmakers and regulators ostensibly working to develop such a safety framework for AI technology, they have so far done so without any checks and balances to reasonably ensure the fairness and effectiveness of the resulting policies.
These government strategies for AI to be "Certified, Tested and Controlled," guided by "Scientific integrity and information quality" and "Risk assessment and management," amount to an approach of control gates, or "doors," attempting to contain the potential dangers of AI. A far more effective strategy on both fronts is transparency. The beauty of transparency is that it's much simpler to both verify and enforce. Proposed laws and regulations can be peer-reviewed. Proposition-style government could be considered for direct voter engagement. From a technical perspective, even the inputs and outputs of a "black-box" neural-net AI module can be monitored and certified to stay within specified ranges. All integration points, both hardware and software, can be known and disclosed. Safe AI from windows, not doors.
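To make the "windows, not doors" idea concrete, here is a minimal sketch of what monitoring a black-box model's inputs and outputs against certified ranges could look like. Everything here is illustrative: `BoundedModel`, `RangeViolation`, and the specific ranges are hypothetical names invented for this example, not any real framework's API.

```python
# Illustrative sketch only: wrap any opaque model so every input and
# output is checked against a certified range and logged for audit.

class RangeViolation(Exception):
    """Raised when a value falls outside its certified range."""

class BoundedModel:
    def __init__(self, model, input_range, output_range):
        self.model = model                # any callable black box
        self.input_range = input_range    # (min, max) certified for inputs
        self.output_range = output_range  # (min, max) certified for outputs
        self.log = []                     # transparent audit trail

    def __call__(self, x):
        lo, hi = self.input_range
        if not lo <= x <= hi:
            raise RangeViolation(f"input {x} outside [{lo}, {hi}]")
        y = self.model(x)                 # the black box stays a black box
        lo, hi = self.output_range
        if not lo <= y <= hi:
            raise RangeViolation(f"output {y} outside [{lo}, {hi}]")
        self.log.append((x, y))           # every call is observable
        return y

# Example: wrap an opaque scoring function with certified ranges.
safe = BoundedModel(lambda x: x * 0.5, input_range=(0, 100), output_range=(0, 50))
print(safe(10))  # 5.0 — within range, and recorded in the audit log
```

The point of the sketch is that nothing about the model's internals needs to be understood or disclosed; the "window" is at the boundary, where behavior can be checked and every interaction leaves an auditable trace.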
That doesn't mean it will always be easy; we here at MX-Fusion are familiar with the shortcomings of AI first-hand. The automation we use for selecting relevant photos to Fuse with songs playing in real time works well most of the time, and often offers pleasant surprises: beautiful images woven into the music at just the right moment. But it is far from perfect; sometimes the selected photo is baffling and does not fit the song at all. The artificial intelligence we use to recommend playlists from your favorite photo is even more advanced, and it too often makes deeply engaging recommendations. But again, more advanced means more fragile; we like to say that "sometimes AI does funny stuff." And we love to talk about it. The technology is fascinating, and not scary at all when you have a nice picture-window view into the inner workings.
Now let’s enjoy a Fusion for Cage The Elephant | House Of Glass: