How do machines "learn"?
While recent years have seen incredible advancements in Artificial Intelligence (AI), no one really knows how these 'first-gen' systems actually work. New work at Quantinuum is helping to develop different frameworks for AI that we can understand - making it interpretable and accountable, and therefore far more fit for purpose.
The current fascination with AI systems built around generative Large Language Models (LLMs) is entirely understandable, but lost amid the noise and excitement is the simple fact that AI tech in its current form is basically a "black box" that we can't look into or examine in any meaningful manner. This is because when computer scientists were first working out how to make machines 'think' in a human-like way, they turned to our best model of a thinking machine: the human brain. The human brain essentially consists of neural networks, and so computer scientists developed artificial neural networks.
However, just as we don't fully understand how human intelligence works, it's also true that we don't really understand how current artificial intelligence works - neural networks are notoriously difficult to interpret and understand. This is broadly described as the "interpretability" issue in AI.
It is self-evident that interpretability is crucial for all kinds of reasons - AI has the power to cause serious harm alongside immense good. It is critical that users understand why a system is making the decisions it does. When we read and hear about 'safety concerns' with AI systems, interpretability and accountability are key issues.
At Quantinuum we have been working on this issue for some time - and we began well before AI systems such as generative LLMs became fashionable. Our AI team, based out of Oxford, has been focused on developing frameworks for "compositional models" of artificial intelligence. Our aim is to build artificial intelligence that is interpretable and accountable. We do this in part by using a branch of mathematics called "category theory" that has been applied to everything from classical computer programming to neuroscience.
Category theory has proven to be a sort of "Rosetta stone", as John Baez put it, for understanding our universe in an expansive sense - it is helpful for things as seemingly disparate as physics and cognition. In a very general sense, categories represent things and ways to go between things; in other words, category theory is a general science of systems and processes. Using this basic framework to understand cognition, we can build new artificial intelligences that are more useful to us - and we can build them on quantum computers, which promise remarkable computing power.
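To make the "systems and processes" idea a little more concrete, here is a minimal, purely illustrative Python sketch (the names are hypothetical and are not taken from Quantinuum's frameworks): the "things" are kinds of data, the "ways to go between things" are processes, and composition glues small processes into a larger system whose individual steps remain inspectable.

```python
# Minimal illustrative sketch of the "systems and processes" view.
# Hypothetical example, not Quantinuum's code: processes are functions between
# kinds of data, and larger systems are built by composing them.

def compose(f, g):
    """Return the composite process 'first apply f, then apply g'."""
    return lambda x: g(f(x))

def celsius_to_kelvin(c):
    # A simple physical process: one kind of quantity to another.
    return c + 273.15

def kelvin_to_report(k):
    # Another process: a quantity to a human-readable statement.
    return f"{k:.2f} K"

# Composing the two processes yields a new process whose parts stay inspectable.
celsius_to_report = compose(celsius_to_kelvin, kelvin_to_report)
print(celsius_to_report(25.0))  # -> "298.15 K"
```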
Our AI team, led by Dr. Stephen Clark, Head of AI at Quantinuum, has published a paper applying these concepts to image recognition. They used their compositional quantum framework for cognition and AI to demonstrate how concepts like shape, color, size, and position can be learned by machines - including quantum computers.
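As a loose, hypothetical illustration of what "compositional" buys you (a toy sketch under assumed names, not the model from the paper), imagine each concept being learned as its own small map from image features to a label, with the overall description assembled from those parts, so every piece of the answer can be traced back to the concept responsible for it.

```python
# Toy illustration only: hypothetical per-concept maps, not Quantinuum's actual model.
# Each concept (color, shape) is its own simple function over image features,
# and the description is assembled compositionally from their outputs.

def color_concept(features):
    # Hypothetical learned rule: low hue values count as "red", otherwise "blue".
    return "red" if features["hue"] < 30 else "blue"

def shape_concept(features):
    # Hypothetical learned rule: high roundness counts as "circle", otherwise "square".
    return "circle" if features["roundness"] > 0.8 else "square"

def describe(features):
    # Because the description is built from independent concept maps,
    # each part of the output is attributable to one interpretable component.
    return f"a {color_concept(features)} {shape_concept(features)}"

print(describe({"hue": 12, "roundness": 0.95}))  # -> "a red circle"
```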
"In the current environment with accountability and transparency being talked about in artificial intelligence, we have a body of research that really matters, and which will fundamentally affect the next generation of AI systems. This will happen sooner than many anticipate," said Ilyas Khan, Quantinuum's founder.
This work is part of a broader body of research in quantum computing and artificial intelligence, which holds great promise for our future - as the authors say, "the advantages this may bring, especially with the advent of larger, fault-tolerant quantum computers in the future, is still being worked out by the research community, but the possibilities are intriguing at worst and transformational at best."