
How smart is Berlin’s AI?

Artificial Intelligence has captured the attention of investors and developers worldwide. But what is AI actually used for in the German capital, and how intelligent is it, really?

Ronny Vuine of Micropsi Industries says their AI for robotic arms is the world’s most advanced. Photo by Jacob Spetzler
AI is a buzzword, but what’s behind it? Hyped by its quaintly phantasmagoric and sexily anthropomorphic renditions on the big screen (Ex Machina, Her) or in the news – like social humanoid Sophia – artificial intelligence comes off as a wet sci-fi dream come true. But these fantasies of AI outsmarting mankind in a not-so-remote, often dystopian future obscure what the term actually covers, both in research labs and in our everyday lives: self-parking cars, smart home devices, microsurgery, enhanced customer service, and those smartphone apps that filter your news and manage your playlists. All of this is “applied AI”, and most of it rests on ultra-sophisticated data analytics.

Berlin, as a tech hub, has its fair share of AI projects, with over 30 companies in the game – from retail giant Zalando conducting market research to Berlin bank N26 using machine learning to detect fraud to the Bundeswehr’s new “Cyber Innovation Hub” in Moabit. Silicon Alley aspirants and start-ups from across the globe meet here at conferences and summits, and techies predict Berlin could overtake London as the new European AI capital. We spoke to some of Berlin’s top developers to find out just how close we are to the robot revolution.

Feeding artificial brains
“I was researching cognitive systems and emotions, but nobody wants that – why would you want a car that gets angry at you?”
At the forefront of international robotics, Micropsi Industries create AI for industrial robots, enabling them to learn from humans. “Right now, our AI software for robotics is the most advanced in the world,” says Ronny Vuine, who founded the company in 2014. Micropsi’s flagship invention is sold in the form of a little box that you hook up to a robot: fitted with a camera, the controller steers the robot’s movements. “When we say we do AI coordination for robots, you imagine the Terminator, C-3PO, Bender or something,” Vuine laughs. But the industrial robots controlled by his company’s software do not look human at all: “All it really is, is a six- or seven-joint robotic arm.”

The difference from old-school robots? Those had to be programmed manually and could only perform perfectly repetitive tasks – welding car parts, say, provided the parts always hang at exactly the same height and move at the same speed. Such pre-programmed robots fail at the slightest variation. “You can’t just write a program that knows where to go in free space,” Vuine tells us. With Micropsi’s controller, instead of programming the robot, you guide it through the movement, much as you would demonstrate a golf or tennis swing. The robots learn to paint, weld and move things around autonomously. “We use a camera that gives our AI 50,000 numeric values so it can tell where, say, a wobbly cable is in space, or where the target – the plug, for example – is. You show the correct movement with possible deviations, and the robot learns to decide autonomously how to move. It develops a sort of intuition for the correct movement.” (A bare-bones version of this recipe is sketched in code below.)

The ability to make autonomous decisions is what general AI is about, and where Vuine’s original interest lay. But for industry, AI has to remain predictable. “In my student days I researched cognitive systems and emotions, but nobody wants that – why would you want a car that gets angry at you? Ninety-five percent of AI research just isn’t useful for business. You have to dig deep into the maths and adapt it for a very specific environment.” Ironically, this is what happened to Micropsi’s early research on emotions in AI: Hanson Robotics applied it in its famous Sophia, which claims to experience emotions such as anger and happiness. “It’s not a hoax,” Vuine says of Sophia, “but its capabilities are very limited. It follows scripts and knows how to react in certain, very limited ways to outside stimuli, but you can’t have a conversation with that thing, that’s for sure.”

Talk to the bot

Enabling machines to better understand and analyse language, and to communicate with humans, is the job of the language technology laboratory at the German Research Centre for Artificial Intelligence (DFKI). Funded by public shareholders such as the German state and several universities, as well as private ones including Google, SAP, Microsoft and Siemens, DFKI came to Berlin in 2007. Since then, if you’ve tried complaining about a delayed Deutsche Bahn train over email, chances are you’ve been interacting with one of their bots. Ten years ago, that meant a simple algorithm replying with a default block of text. “AI is still mostly complex data analytics,” explains Sven Schmeier, head of DFKI’s language tech laboratory. But today, the lab’s programs can sift through massive amounts of text on the internet and analyse it statistically.
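In machine-learning terms, the “show, don’t program” training Vuine describes is known as imitation learning, or behavioural cloning: record demonstrations as pairs of camera readings and arm motions, then fit a model that predicts motion from sight. Here is a deliberately tiny Python sketch of that recipe – every dimension, name and data point is invented, and Micropsi’s production software is proprietary and far more sophisticated.

```python
# Toy behavioural cloning: learn "camera reading -> joint motion" from
# human demonstrations. The data here is random noise standing in for
# real recordings; the shapes are the only meaningful part.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# 200 demonstration frames. Vuine mentions ~50,000 camera values per
# frame; we use 64 to keep the toy small.
camera_frames = rng.normal(size=(200, 64))
# For each frame, the six joint velocities the human guided the arm through.
joint_velocities = rng.normal(size=(200, 6))

# Fit sight -> movement. Demonstrating the movement "with possible
# deviations" gives the model varied examples, so it can generalise
# instead of memorising a single trajectory.
policy = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
policy.fit(camera_frames, joint_velocities)

# At run time the camera reports a new scene and the policy proposes a motion.
new_frame = rng.normal(size=(1, 64))
print(policy.predict(new_frame))  # six proposed joint velocities
```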
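Schmeier’s “complex data analytics” looks similarly unglamorous in code. The standard statistical recipe his description points at – turn each text into word statistics, then let a classifier learn which statistics mark which topic – fits in a dozen lines of Python. The snippets and labels below are invented for illustration; they are not DFKI data.

```python
# A toy statistical text classifier: the model is shown labelled snippets
# and learns to sort new text by topic. Real systems train on millions of
# crawled documents, not four sentences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "Train ICE 542 to Hamburg is cancelled today",
    "Signal failure near Spandau, expect long delays",
    "Beautiful sunset over the Spree this evening",
    "New bakery opened on Kastanienallee, great pretzels",
]
labels = ["transport", "transport", "other", "other"]

# TF-IDF turns each snippet into a vector of word statistics; logistic
# regression then learns which statistics mark each topic.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

print(model.predict(["Another train to Hamburg cancelled, long delays expected"]))
# -> ['transport']
```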
“In the old days we had to manually program grammar rules,” Schmeier says. “Now our programs can scan the internet – a process called crawling – and teach themselves to read by identifying linguistic patterns.” Once the AI software can read, it can be trained to discern different kinds of content: “You basically feed it with text snippets on different topics, so it learns to differentiate and, for example, filter out hate speech on social media.”

Analysing social media can indeed be a handy tool, as another DFKI project, ‘Smart Data for Mobility’, demonstrates. The platform collects information from transportation companies such as DB and airlines, and uses language technology to precisely extract data from Twitter and Facebook posts. If something unexpected is blocking traffic, the platform’s AI quickly identifies where the blockage is and calculates the best alternative routes. DB has already adopted the method to optimise logistics. (So no need to worry about those delays.)

Schmeier describes Berlin as a unique hub for computer science research. Apart from the Humboldt, Technical and Free universities and research institutions such as the Fraunhofer Institute and the Helmholtz Association, he praises Berlin’s active start-up scene: “It is very much up and coming but not yet too established for new ideas to flow freely.” Still, when it comes to more autonomous, general AI, he is sceptical, saying he doesn’t know of a single research institute attempting to develop such software: “Strong AI in the form of an artificial being that has its own wishes and desires is just a science-fiction idea,” he says. “Of course, we do not build that; at this point we don’t even have a concept of what real intelligence is.”

The future of autonomous driving

With BMW and Daimler funnelling hundreds of millions of dollars into the research and development of self-driving cars, automated driving is surely just around the corner. But how soon will these smart pods hit Berlin’s streets? According to Oliver Beck of Berlin software company Hella Aglaia, it will take at least another 30 years until automated cars become affordable for the average customer. Aglaia, bought in 2006 by car parts company Hella, has developed leading software that can automatically recognise pedestrians, dim the car’s lights and warn drivers if they are getting too close to another car. The software is currently used to keep drivers within their lane: if you cross a white line, the AI gently steers the car back on track (a bare-bones version of this correction is sketched in code below). “We are working a lot on making our AI safe, but in the end, you are still responsible,” says Beck. “The car will go in one direction softly, but you can still steer against it.”

How that works in situations where every millisecond counts is unclear – and, as Tesla’s and Uber’s adoption of AI technology in the US has shown, it can end in fatal accidents. At some point, the integrated AI system would have to make decisions on what Beck calls “corner cases” concerning life and death. In the event of an unavoidable collision, who should the AI protect: the driver or the pedestrian? And what criteria would that be based on? Should young people’s lives be favoured over old people’s? So far, Hella Aglaia has not employed any AI ethics specialists.
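Mechanically, the lane-keeping correction described above boils down to a bounded feedback controller that the driver can always overpower. A deliberately simplified sketch – every constant here is invented, and a production system is vastly more careful:

```python
# Toy lane-keeping assist: measure how far the car has drifted from the
# lane centre and apply a soft, capped steering correction that yields
# to the driver. Numbers are illustrative only.

def steering_correction(lateral_offset_m: float,
                        driver_torque_nm: float,
                        gain: float = 0.4,
                        max_correction_rad: float = 0.1) -> float:
    """Return a steering adjustment in radians (negative steers left)."""
    # "You can still steer against it": noticeable driver input wins.
    if abs(driver_torque_nm) > 2.0:
        return 0.0
    # Proportional response: the further off-centre, the stronger the pull.
    correction = -gain * lateral_offset_m
    # "The car will go in one direction softly": cap the nudge.
    return max(-max_correction_rad, min(max_correction_rad, correction))

# Drifted 30cm to the right, hands resting lightly on the wheel:
print(steering_correction(lateral_offset_m=0.3, driver_torque_nm=0.0))
# -> -0.1, a gentle pull back toward the lane centre
```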
But on the issue of accountability, German law is clear: since June last year, cars with self-driving functions have been allowed on streets here, the main condition being the presence of a steering wheel and a person sitting behind it. So, whatever happens, the driver remains responsible.

Meanwhile, Hella Aglaia’s researchers are trying to understand what their own software has learned, in order to improve it. “I don’t know if the AI recognises the stop sign because of its colour or shape, or if it has developed criteria I can’t even imagine,” says machine learning engineer Marc Zerwas. From the gigantic amount of input data, the software deduces its own benchmarks and becomes almost impenetrable, even to its makers. “If you were teaching a child, it could tell you what it’s looking at or thinking. Algorithms can’t do that. A big part of my work is trying to understand what the algorithms have taught themselves.”

Opaque they may be, but algorithms are still far from actually being intelligent: the software can only recognise patterns when a scenario comes up repeatedly in its data sets. As Zerwas puts it: “What we are doing is beating data into them with brute force; the human brain learns much more elegantly than that.” Perhaps AI hasn’t outsmarted us quite yet.
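One standard tool for the detective work Zerwas describes is occlusion probing: blank out pieces of an input and watch the model’s confidence move – the regions whose removal hurts most are what the model relies on. The sketch below uses an invented stand-in “model” that, like a naive sign detector, keys purely on colour.

```python
# Toy occlusion probe: grey out one patch of the image at a time and
# record how much the model's confidence drops. Big drops mark the
# regions the model "looks at". Model and image are stand-ins.
import numpy as np

def occlusion_map(confidence, image: np.ndarray, patch: int = 8) -> np.ndarray:
    """confidence: a function mapping an image to P(stop sign)."""
    baseline = confidence(image)
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # blank one patch
            heat[i // patch, j // patch] = baseline - confidence(occluded)
    return heat

# Stand-in "model": confidence grows with the amount of red in the image.
def toy_confidence(img: np.ndarray) -> float:
    return float(img[..., 0].mean()) / 255.0

img = np.zeros((32, 32, 3), dtype=np.uint8)
img[8:24, 8:24, 0] = 255  # a red square where the "sign" would be
print(occlusion_map(toy_confidence, img).round(2))
# The hot patches coincide with the red square: this toy model keys on colour.
```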