Robot Journalism – What's really possible, and what are its limits?

Jul 25 2014

Recently, a supercomputer supposedly generated text that passed as "human written", and the debate about artificial intelligence has made a riotous comeback. In a study evaluating automatically generated texts, Scandinavian researcher Christer Clerwall came to a notable conclusion: readers cannot reliably distinguish content written by a human from content generated by software.

The fundamental debate about robots: What can people do that robots can't?

Based on Clerwall's findings, the writing software in question should have passed the "Turing Test". British computer scientist Alan Turing devised the test in the 1950s to determine whether a machine is capable of intelligence equal to that of a human being. The test consists of a dialogue conducted via keyboard and screen: the subject must decide whether the mind behind the answers is a machine or a person. We are living in a science-fiction age brimming with artificial intelligence, and the ongoing debate is becoming ever more interesting: What are the limits of machines? Is there anything AI cannot do?
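As a toy illustration of the test's setup (everything below is invented for this post, not Turing's original formulation), a minimal dialogue loop might look like this: the subject asks questions, receives typed answers from a hidden respondent, and must guess who wrote them.

```python
# Minimal sketch of a Turing-test-style dialogue (illustration only).
# The "machine" here is a trivial stand-in, not a real conversational AI.
import random

def machine_reply(question):
    # A real system would generate an answer; this canned reply is a placeholder.
    return "That is a very interesting question."

def human_reply(question):
    # In the real test, a hidden person types the answer.
    return input("(hidden human) Your answer: ")

# The subject does not know which respondent was chosen.
respondent = random.choice([machine_reply, human_reply])

for _ in range(3):
    question = input("Ask a question: ")
    print("Answer:", respondent(question))

guess = input("Machine or person? ")
print("You guessed:", guess)
```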

Semantic software produces decent texts – but it can’t replace journalists

Challenged by the field's new buzzword, "robot journalism", journalists find themselves debating whether software can take over their work and ultimately make them unnecessary. Since aexea developed AX Semantics (www.ax-semantics.com), which uses artificial-intelligence algorithms to write meaningful texts, we are regularly drawn into these debates. Our background in writing, linguistics and journalism gives us a fair say in the discussion, because we have a clear understanding of how technology and journalists fit together. Of course we believe that our software can write good articles. And yes, we think it can take over part of journalistic work. But can it make journalists redundant? We don't think so, because there are crucial areas that a program simply can't match.

Software is not investigative

The software doesn't have a journalistic "nose". While smart logic can certainly uncover inconsistencies (for example, when data contradicts itself), the software cannot draw conclusions from those contradictions and is therefore unable to write a meaningful article from contradictory data. The program takes the given data as "true" and does not check sources. Only if the data is reliable and well maintained can the software write a high-quality text. A good journalist, by contrast, develops a feel for the data at hand and can assess the reliability of its sources. Journalists also gather information that is not present in any database, for instance by speaking with stakeholders or interviewing experts.
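To make the "smart logic" idea concrete, here is a minimal sketch in Python (the field names and rules are invented for this post, not taken from AX Semantics): it can flag a record whose figures contradict each other, but it cannot decide which source is right or what the story behind the contradiction is.

```python
# Minimal sketch of a rule-based consistency check on structured data.
# Field names and rules are hypothetical, invented for illustration.

records = [
    {"team": "FC Example", "goals_for": 3, "goals_against": 1, "result": "loss"},
    {"team": "SV Sample",  "goals_for": 0, "goals_against": 2, "result": "loss"},
]

def find_contradictions(record):
    """Return human-readable flags for internally inconsistent records."""
    flags = []
    if record["goals_for"] > record["goals_against"] and record["result"] != "win":
        flags.append(f"score says win, result field says {record['result']}")
    if record["goals_for"] < record["goals_against"] and record["result"] != "loss":
        flags.append(f"score says loss, result field says {record['result']}")
    return flags

for record in records:
    for flag in find_contradictions(record):
        # The software can only flag the contradiction; deciding which
        # source to trust, and what the story is, remains a human task.
        print(record["team"], "->", flag)
```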

The software lacks world knowledge

Robots can be fed copious amounts of data and can learn rules that enable them to work out complex relationships within that data. The more comprehensive and higher-quality the rule system, the more meaningful and varied the texts that emerge from the data. But what we humans can probably never teach software is "world knowledge": the knowledge about the environment and society that people have acquired over the ages through many different channels and with different "methods". World knowledge is a key factor in language, because the meaning of many words can only be grasped by someone who knows the world they refer to.
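A toy example of such a rule system, sketched in Python with invented templates and thresholds, shows the principle: the generator picks phrases by rule, so richer rules yield more varied text, but every rule still encodes knowledge a human put there.

```python
# Toy data-to-text generator: a rule system maps structured match data
# to sentences. Templates and thresholds are invented for illustration.

def describe_match(home, away, home_goals, away_goals):
    if home_goals == away_goals:
        return f"{home} and {away} shared the points in a {home_goals}-{away_goals} draw."
    winner, loser = (home, away) if home_goals > away_goals else (away, home)
    high, low = max(home_goals, away_goals), min(home_goals, away_goals)
    # Rule: a margin of three or more goals is reported as a rout.
    if high - low >= 3:
        return f"{winner} swept {loser} aside {high}-{low}."
    return f"{winner} edged out {loser} {high}-{low}."

print(describe_match("FC Example", "SV Sample", 4, 1))  # rout
print(describe_match("FC Example", "SV Sample", 2, 2))  # draw
```

The point of the sketch: everything the text can say is bounded by the rules, and nothing resembling world knowledge enters the output unless a human writes it in.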

Robots have no talent for tragedy

So the journalist understands many contexts that software can neither recognise nor understand. The more general the subject and the less structured the available data, the less suitable technology is for writing the story. A feature article about a theatre performance that goes beyond stock descriptive phrases can hardly be automated. Especially where emotions and feelings are involved, programs are no match for the human ability (which most of us have). The Austrian writer and journalist Peter Glaser has described the limitation of the algorithms like this: what the machine lacks is a sense of tragedy. One can rationalise disasters, but not tragedy.
