In response to the European Commission's call to develop "ethical" artificial intelligence in Europe, the Parisian startup Golem.ai says it builds more transparent tools.
franceinfo: How is your artificial intelligence more ethical than others'?
Thomas Solignac, co-founder of Golem.ai: Machine learning, widely used in AI today, is a "black box" approach: you can't look inside, and you don't understand what the AI is doing. When AI is starting to drive cars and inform medical decisions, real transparency seems essential. At Golem.ai, we develop, among other things, automatic email management tools to improve customer relations. Unlike a purely statistical approach, we do "symbolic AI", a form of mathematization of reasoning. The result is a much more interpretable AI.
Isn't that the same as the "old" expert systems, which were merely chains of decisions programmed in advance?
It may look similar, but we enrich it with elements from the humanities, such as linguistics, psychology, and even philosophy. Personally, I studied computer science and philosophy, and I draw heavily on philosophy to inform my approach to computer science.
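To make the contrast concrete, here is a minimal sketch of what a symbolic, rule-based approach to email routing can look like. This is purely illustrative and is not Golem.ai's actual system; the rule names, keywords, and routing destinations are invented for the example. The point it demonstrates is transparency: every decision follows an explicit rule, so the system can always explain why it routed a message, unlike a statistical "black box".

```python
# Illustrative sketch (NOT Golem.ai's actual system): a tiny symbolic,
# rule-based email router. Each decision is an explicit, inspectable rule,
# so the path from input to output can be fully traced.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str          # human-readable label, used to explain decisions
    keywords: tuple    # trigger words (a stand-in for real linguistic analysis)
    route: str         # destination team (hypothetical names)

RULES = [
    Rule("refund request", ("refund", "reimbursement"), "billing"),
    Rule("delivery issue", ("delivery", "package", "tracking"), "logistics"),
    Rule("account access", ("password", "login"), "support"),
]

def route_email(text: str) -> tuple:
    """Return (destination, explanation).

    The explanation is the point: a symbolic system can always
    say *why* it decided, rule by rule."""
    lowered = text.lower()
    for rule in RULES:
        hits = [kw for kw in rule.keywords if kw in lowered]
        if hits:
            return rule.route, f"matched rule '{rule.name}' on {hits}"
    return "triage", "no rule matched; escalated to a human"

print(route_email("Hello, my package never arrived, tracking shows nothing."))
# → ('logistics', "matched rule 'delivery issue' on ['package', 'tracking']")
```

A real symbolic-AI system would of course go far beyond keyword matching, using linguistic analysis of the kind Solignac describes, but the core property is the same: the reasoning is written down and can be audited.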
Does working on these technologies give you the feeling of holding power in your hands?
I have the feeling that a new door has opened, on the scale of our history. AI holds enormous positive potential, but there are also risks to be wary of. Its impact on the world of work is enormous.