Interview

Translating formal explanations into natural language

Prof. Birte Glimm is a computer scientist and Assistant Director of the Institute for Artificial Intelligence at the University of Ulm. Together with her research group, she develops and optimizes algorithms for automatically deriving conclusions and analyzes their complexity. As a research fellow at fortiss, she will be engaged in the flagship project Robust AI and will collaborate with the Robotics and Machine Learning, Business Model and Service Engineering, and Model-based Systems Engineering research groups.


People exchange knowledge and information by communicating with others, whether in oral or written form. What method can technical systems use?

In the field of knowledge representation, we model the knowledge that the systems need about their application domains. One example is the intelligent do-it-yourself assistant that we are developing jointly with an industry partner. We model material properties and tools, as well as actions that can be carried out with these materials and tools. In the German “Heimwerker” or “handyman” domain, for instance, the system would conclude something like: “Spruce is a softwood; when using screws with a diameter of less than 3 millimeters in softwood, pre-drilled holes are not required.” In this way, the machine-usable knowledge is also expanded step by step. To some extent, pre-existing formalized knowledge can be reused, so that in theory even machines can share knowledge.
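
As an illustration only, and not the group’s actual system, such domain knowledge might be encoded as facts plus a rule; all predicate and constant names below are hypothetical:

```python
# Minimal sketch of logic-based domain knowledge for the handyman example.
# Facts are (predicate, subject, value) triples; the rule derives the
# pre-drilling conclusion quoted in the interview.

facts = {("material", "spruce", "softwood"),
         ("diameter_mm", "screw_a", 2.5)}

def no_predrill_needed(facts, screw, wood):
    """Rule: softwood + screw diameter < 3 mm => no pre-drilled hole."""
    is_soft = ("material", wood, "softwood") in facts
    is_thin = any(f[0] == "diameter_mm" and f[1] == screw and f[2] < 3.0
                  for f in facts)
    return is_soft and is_thin

print(no_predrill_needed(facts, "screw_a", "spruce"))  # True
```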


That means we’re headed toward an environment in which, just like humans, technical systems create rules based on a few observations and derive conclusions for all other possible situations?

The best-known example of a logical inference is the syllogism. From the knowledge that “all humans are mortal” and “Socrates is a human”, you can conclude that “Socrates is mortal.” The two given facts are the premises in this case. This syllogism can be formalized, for instance, as: “all x’s are y’s” and “z is an x”, therefore “z is a y”. Here x, y and z are placeholders. Logic-based knowledge representation is about establishing rules that allow valid conclusions to be derived as efficiently as possible.
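
In standard first-order notation, the Socrates syllogism can be written with the premises above the inference line and the conclusion below it:

```latex
% The Socrates syllogism as a first-order inference rule.
\[
\frac{\forall x\,\bigl(\mathit{Human}(x) \rightarrow \mathit{Mortal}(x)\bigr)
      \qquad \mathit{Human}(\mathit{socrates})}
     {\mathit{Mortal}(\mathit{socrates})}
\]
```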


You developed a tool for deriving conclusions together with other researchers. Why is it so important to automate the process?

The bottom line is that computers are faster at deriving conclusions. Large medical knowledge bases consist of several hundred thousand axioms, in other words, pieces of existing knowledge about diseases, medications or human anatomy. Since this knowledge was created by multiple experts over a long period of time, errors find their way into the knowledge base from time to time, and these can be detected with automated processes. Without the help of computers, humans can no longer maintain an overview of this wealth of knowledge in its entirety.
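
As a hedged sketch of what such an automated check can look like: the Python library owlready2 bundles the HermiT reasoner and can flag unsatisfiable classes in an OWL ontology. The file path below is a placeholder; large medical ontologies such as SNOMED CT are distributed separately.

```python
# Sketch: detecting modeling errors in an OWL knowledge base with
# owlready2. The ontology path is a placeholder, not a real file.
from owlready2 import get_ontology, default_world, sync_reasoner

onto = get_ontology("file:///path/to/medical_ontology.owl").load()

with onto:
    sync_reasoner()  # classify the ontology; slow on very large inputs

# Classes equivalent to owl:Nothing are unsatisfiable -- a typical sign
# that contradictory axioms crept in over years of collaborative editing.
for cls in default_world.inconsistent_classes():
    print("Unsatisfiable:", cls)
```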


Humans recognize inconsistencies or deviations from the rules. For example: all birds can fly; penguins can’t fly. A machine would conclude that a penguin is not a bird, even though it is zoologically classified as one. What concepts do scientists have for resolving these types of issues?

Luckily, it’s not quite that bad in the field of logic-based knowledge representation. In this case, the machine would figure out that no statement can be made as to whether a penguin is a bird. This can be addressed with so-called default logic, in which you can stipulate that birds usually fly. If a machine is then told that Tweety is a bird and that birds usually fly, the assumption that Tweety can fly is maintained until information to the contrary becomes available; in this case: Tweety is a penguin, and penguins can’t fly.
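
A minimal sketch of this non-monotonic behavior, with invented names: the default conclusion “Tweety can fly” holds only as long as no exception is known.

```python
# Default rule: Bird(x) and not provably abnormal => Flies(x).
facts = {("Bird", "tweety")}
exceptions = set()  # individuals known not to fly, e.g. penguins

def flies(individual):
    return ("Bird", individual) in facts and individual not in exceptions

print(flies("tweety"))   # True: assumed to fly by default

# New information arrives: Tweety is a penguin, and penguins cannot fly.
exceptions.add("tweety")
print(flies("tweety"))   # False: the default conclusion is retracted
```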


At the moment, the problem is that decisions made by AI technologies are not transparent. When algorithms derive false conclusions, how do you substantiate that?

This flaw exists above all in machine learning approaches, which are mainly black-box approaches. Here, the issue of transparency is clearly a current research topic. With logic-based processes like the ones we are developing, the machine can provide a proof, or formal explanation, of how it arrived at a particular conclusion. The challenge is then figuring out how to present these formal explanations to the user in a suitable manner.
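
To illustrate the idea, here is a toy forward chainer (not the group’s tool) that records which premises produced each conclusion, so every derived fact carries its own formal explanation; the facts and rule are invented.

```python
# Each derived fact stores the set of premises that justified it.
rules = [
    ({"Softwood(spruce)", "Thin(screw_a)"}, "NoPredrill(spruce, screw_a)"),
]
facts = {"Softwood(spruce)": None, "Thin(screw_a)": None}  # None = asserted

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts.keys() and conclusion not in facts:
            facts[conclusion] = premises  # remember the justification
            changed = True

for fact, why in facts.items():
    print(fact, "<=", ", ".join(sorted(why)) if why else "asserted")
```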


What are the available options?

We’re developing processes for translating formal explanations into natural language. One of the challenges is that systems that automatically generate explanations are optimized for performance, not for producing the shortest possible explanations. That’s why we developed an approach designed to find the shortest possible explanations. Together with other optimizations, such as the consolidation of proof steps, we can then create more concise explanations. The system-generated explanations are nonetheless still relatively long and in some instances repetitive, so we are working on further strategies for compact yet informative representations. The approach also takes the user’s prior knowledge into account, because known aspects don’t have to be explained again, or at least not in the same degree of detail.
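
A sketch of what template-based verbalization with a simple user model could look like; the templates, proof steps, and user-knowledge set are all hypothetical, not the group’s actual pipeline.

```python
# Each kind of inference step maps to a sentence pattern; steps the user
# already knows are omitted from the generated explanation.
templates = {
    "subclass": "Every {sub} is a {sup}.",
    "instance": "{ind} is a {cls}.",
}

user_knows = {("subclass", "spruce", "softwood")}  # assumed prior knowledge

def verbalize(step):
    kind, *args = step
    if step in user_knows:
        return None  # known to the user: skip it
    if kind == "subclass":
        return templates["subclass"].format(sub=args[0], sup=args[1])
    return templates["instance"].format(ind=args[0], cls=args[1])

proof = [("subclass", "spruce", "softwood"),
         ("instance", "this board", "spruce")]
print(" ".join(s for s in map(verbalize, proof) if s))
```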


What project are you interested in initiating with fortiss?

I would be pleased if we could work together on the challenge of “hybrid” artificial intelligence, in other words, the issue of how rule-based approaches can be successfully combined with learning-based approaches.


What are the potential fields of application?

We just recently developed initial approaches for improving voice assistants. The recognition of human intentions, for instance, is mostly learning-based. Once a system has heard enough human statements and the intentions behind them, a learning-based system generalizes: it can also take statements that it hasn’t heard yet and map them to an intention. But systems often have additional information about the context the user is currently in. We can use this knowledge for statements the system is uncertain about, and it also makes the decisions transparent.
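
A hedged sketch of such a hybrid pipeline: a stand-in learned classifier proposes intents with confidences, and hand-written context rules decide when the classifier is unsure. All intents, scores, and rules here are invented for illustration.

```python
def classify(utterance):
    # stand-in for a learned intent model returning (intent, confidence)
    # pairs sorted by confidence; a real model would score the utterance
    return [("play_music", 0.48), ("set_timer", 0.44)]

def plausible(intent, context):
    # hand-written context rules, purely illustrative
    if intent == "set_timer":
        return context.get("activity") == "cooking"
    if intent == "play_music":
        return not context.get("quiet_hours", False)
    return True

def recognize(utterance, context, threshold=0.7):
    candidates = classify(utterance)
    best, confidence = candidates[0]
    if confidence >= threshold:
        return best  # the learned model alone is sure enough
    # Low confidence: let symbolic context knowledge pick a candidate,
    # which also yields a human-readable reason for the decision.
    compatible = [i for i, _ in candidates if plausible(i, context)]
    return compatible[0] if compatible else best

# During quiet hours in the kitchen, the ambiguous utterance is resolved
# to the timer intent rather than the music intent.
print(recognize("start it", {"activity": "cooking", "quiet_hours": True}))
```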


Can you predict when such systems will be operational?

Even when we develop viable technologies at the university, we seldom follow them all the way to practical application. I would be all the more pleased if, via fortiss, I had the chance to follow these developments through to practical use.