AI Does Not Write Fake News, People Write Fake News

AI content generation has been in the news for a while now. One up-and-coming player in the space is Big Bird, and they are putting a counter-spin on automated journalism. Instead of trying to make the AI perfect, they design their AI to intentionally generate incorrect and even poorly written articles. The idea is to help journalists write more efficiently by creating an initial structure for an article on a particular news story as a “dynamic template” that humans can then use to write the full piece more quickly. This concept is an evolution of the word processor templates that have been around for ages; applying AI to provide an initial structure with contextual flow and relevant points of interest makes those templates dramatically more useful.

Using automation this way seems like a natural progression in the application of artificial intelligence. Using automation to intentionally insert mistakes, on the other hand, seems counterproductive at first. Yet it may prove to be a clever solution to the age-old plague of copy/paste mistakes and oversights. With current state-of-the-art technology it is impossible for this automation to author perfectly accurate news stories, so intentionally inserting obvious mistakes makes it easy for human editors to fact-check and chase down the real story in the areas where artificial intelligence is not intelligent enough to write on its own.
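
To make the idea concrete, here is a minimal sketch of what such a dynamic template might look like. This is not Big Bird’s actual system; the function name, the flag markers, and the sample claims are all invented for illustration.

```python
# A minimal sketch of the "dynamic template" idea -- entirely hypothetical,
# not Big Bird's actual code. Every machine-guessed fact is wrapped in a
# loud marker so a human editor has to chase it down before publication.

def draft_template(headline: str, machine_claims: list[str]) -> str:
    """Assemble an article skeleton with every unverified claim flagged."""
    lines = [headline.upper(), ""]
    for claim in machine_claims:
        # Deliberately obvious markers make copy/paste oversights unlikely:
        # nothing flagged [VERIFY] should survive into the published piece.
        lines.append(f"[VERIFY] {claim}")
    lines.append("")
    lines.append("[EDITOR: confirm or replace every flagged line above]")
    return "\n".join(lines)

print(draft_template(
    "Local Council Approves Budget",
    ["The council voted 7-2 on Tuesday.",
     "The budget totals $14 billion."],  # an intentionally implausible figure
))
```

The point of the obvious markers is the same as Big Bird’s intentional mistakes: an error that is impossible to miss is an error that gets fact-checked.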

Another example comes from the more established Reuters, which has partnered with Synthesia to develop in a different arena: video automation based on real-world human journalists, with the dialog machine-generated as well. Reuters is focused on sportscasting stories, an area where it has been applying AI automation for quite some time now with a large degree of success. In contrast to Big Bird, Reuters manages the limitations of AI by focusing on a very narrow and specific slice of sportscasting: live play-by-play snippets. In this way, Reuters’ automation technology delivers reliable accuracy because the realm of content is small enough for machines to master.

This all may seem quite terrible at first. Imagine AI cranking out fake news articles faster than the average person could even read them, then turning back to television in frustration only to find deepfake videos taking over. But it might not be all that bad. The intent of AI-generated articles is to empower human journalists by doing much of the boilerplate and menial structure building. This is a hybrid approach to AI technology, and the raw output is not intended to be published directly to readers. It’s a similar story on the video front, where Reuters is looking to deliver more engaging content to viewers in a timely fashion.

Now, we here at MX-Fusion are familiar with the shortcomings of AI first-hand. The automation we use for selecting relevant photos to fuse with songs playing in real time works well most of the time, and often offers pleasant surprises: beautiful images nicely woven in with the music at just the right moment. But it is far from perfect; sometimes the selected photo is baffling and does not fit the song at all. The artificial intelligence we use to recommend playlists from your favorite photo is even more advanced, and it too often makes deeply engaging recommendations. But again, more advanced means more fragile; we like to say that “sometimes AI does funny stuff”. Does that mean the AI is somehow malicious? Is it trying to play tricks on us? Of course not; it’s simply algorithms that work better in some cases than in others.
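
To see why “closest match” and “good match” are not the same thing, here is a simplified sketch of similarity-based photo selection. It is not our production code, and the feature vectors are invented, but it shows how a selector that always returns the nearest photo can still return a baffling one.

```python
# A toy illustration (not MX-Fusion's production code) of similarity-based
# photo selection. The selector always returns the closest photo in a shared
# feature space -- even when every candidate is a mediocre fit.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pick_photo(song_features, photo_library):
    """Return the photo whose (made-up) features best match the song."""
    # max() happily picks a "best" photo even when all the scores are weak,
    # which is exactly when the chosen image looks baffling to a listener.
    return max(photo_library, key=lambda item: cosine(song_features, item[1]))

photos = [
    ("sunset.jpg", [0.9, 0.1, 0.3]),
    ("concert.jpg", [0.2, 0.8, 0.5]),
    ("spreadsheet_screenshot.jpg", [0.1, 0.2, 0.1]),
]
# A mellow song with weak matches everywhere: the spreadsheet screenshot
# happens to score highest, and "funny stuff" ends up on screen.
print(pick_photo([0.1, 0.3, 0.1], photos))
```

Nothing malicious happens in a case like this; the algorithm does exactly what it was asked to do, and the result is still a poor fit.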

Similarly, consider that an AI-generated article meant for human editing and review prior to publication is not good or bad in and of itself. It is the human editor who makes the ultimate decision on the stance of any given article and whether or not it is factually accurate. This is no different from an editor reviewing an article submitted by a human journalist: the editor has both the opportunity and the responsibility to ensure objective, accurate reporting. Furthermore, random is random. Just because algorithm-generated articles are full of inaccuracies does not mean they favor one perspective or another. Human authors, on the other hand, produce plenty of biased journalism; in fact, humans can deliberately slant coverage toward a particular agenda, whereas the AI is simply cranking out a stream of random mistakes.

Furthermore, AI does not wake up in the morning and start publishing news stories or deepfake videos on its own; even when it runs autonomously, it was humans who deployed it. There are potential risks and dangers to be sure, but it is the humans who own and control AI who determine what impact it has.

Perhaps the biggest impact will be on how media is consumed. As we come to question the written word even more than we do today, open-forum responses could evolve into deeper conversations, with experts chiming in on both sides. Deepfake videos may prove entertaining and stimulate fan art. Both have potentially negative consequences; on the flip side, there is also the potential for much richer, more engaging content spurring higher levels of interactive media consumption. Whether this type of media automation proves more beneficial than detrimental is ultimately up to us humans on both sides of the screen.

While we ponder this future state of media, let’s enjoy a Fusion for Level 42 | Something About You:
