Photonic computing devices are a compelling alternative to conventional computing setups for machine learning applications, as they are nonlinear, fast and easy to parallelize. Recent work demonstrates the potential of these optical systems to process and classify human motion from video.
Origami engineering has long held the promise of complex and futuristic machines. A new foldable haptics system shows that this paradigm can be functional as well.
Loss-of-function mutations in metal-binding proteins are implicated in numerous diseases, and identifying such ‘cracks’ will be valuable to biologists and medical doctors in the study and treatment of disease. A deep learning approach has been developed to tackle this challenging task.
In cooperative games, humans are biased against AI systems even when those systems perform better than human counterparts. This raises a question: should AI systems ever be allowed to conceal their true nature and lie to us for our own benefit?
The current national cybersecurity and defence strategies of several governments explicitly mention the use of AI. However, it will be important to develop standards and certification procedures, involving continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems, rather than on eliciting users’ trust in AI.
Adversarial attacks make imperceptible changes to a neural network’s inputs so that the network recognizes them as something entirely different. This flaw can give us insight into how these networks work and how to make them more robust.
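The idea behind one common family of such attacks, the fast gradient sign method, can be sketched on a toy classifier. This is a minimal illustration, not the method used in any particular paper: the weights, the logistic model and the (deliberately exaggerated) step size `eps` are all hypothetical, chosen so the effect is visible in three dimensions.

```python
import numpy as np

# Toy logistic "network": p(y=1|x) = sigmoid(w.x + b).
# Weights are illustrative, not from any trained model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast gradient sign method: step each input coordinate by eps
    in the direction that increases the loss."""
    # Gradient of binary cross-entropy w.r.t. x is (p - y) * w.
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([2.0, -1.0, 0.0])        # confidently classified as class 1
x_adv = fgsm(x, y_true=1.0, eps=1.5)  # eps exaggerated for this 3-d toy

print(predict(x) > 0.5)      # True  (original input: class 1)
print(predict(x_adv) > 0.5)  # False (perturbed input flips the decision)
```

In high-dimensional inputs such as images, a much smaller `eps` suffices, because the signed perturbation accumulates across thousands of pixels while remaining invisible to a human observer.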
AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.
Robots and machines are generally designed to perform specific tasks. Unlike humans, they lack the ability to generate feelings based on interactions with the world. The authors propose a new class of machines with evaluation processes akin to feelings, based on the principles of homeostasis and developments in soft robotics and multisensory integration.
As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is ‘ethical AI’? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.
DeepMind’s AlphaFold recently demonstrated the potential of deep learning for protein structure prediction. DeepFragLib, a new protein-specific fragment library built using deep neural networks, may have advanced the field to the next stage.
Traditional robotic grasping focuses on manipulating an object, often without considering the goal or task involved in the movement. The authors propose a new metric for success in manipulation that is based on the task itself.
To prepare robots for working autonomously under real-world conditions, their resilience and capability to recover from damage needs to improve radically. A fresh take on robot design suggests that instead of adapting the robotic control strategy, we could enable robots to change their physical bodies to recover more effectively from damage.
Classical statistical analysis in many empirical sciences has lagged behind modern trends in analytics for large-scale datasets. The authors discuss the influence of more variables, larger sample sizes, open data sources for analysis and assessment, and ‘black box’ prediction methods on the empirical sciences, and provide examples from imaging neuroscience.
There has been a recent rise of interest in developing methods for ‘explainable AI’, where a second model is built to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, in particular in domains that directly affect human lives, such as healthcare and criminal justice.
Classic theories of reinforcement learning and neuromodulation rely on reward prediction errors. A new machine learning technique relies on neuromodulatory signals that are optimized for specific tasks, which may lead to better AI and better explanations of neuroscience data.
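The classic reward prediction error at the heart of these theories is the temporal-difference error, delta = r + gamma * V(s') - V(s). A minimal sketch of TD(0) value learning on a toy chain task, with illustrative parameters (the chain length, `gamma` and `alpha` are assumptions, not from the article):

```python
import numpy as np

# TD(0) value learning on a 5-state chain: the agent walks right from
# state 0 and receives reward 1 on entering the final (terminal) state.
n_states = 5
V = np.zeros(n_states)       # value estimates, terminal state stays 0
gamma, alpha = 0.9, 0.5      # discount factor and learning rate

for episode in range(200):
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Reward prediction error: delta = r + gamma*V(s') - V(s)
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta

# Learned values approach gamma**(steps to reward): ~[0.729, 0.81, 0.9, 1.0, 0]
print(np.round(V, 3))
```

The global scalar `delta` here is exactly the signal that classic theories identify with dopamine; the article's point is that a learned, task-optimized neuromodulatory signal can replace this single hand-derived error term.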
Artificial intelligence and machine learning systems may reproduce or amplify biases. The authors discuss the literature on biases in human learning and decision-making, and propose that researchers, policymakers and the public should be aware of such biases when evaluating the output and decisions made by machines.
Humans infer much about the intentions of others just by looking at their gaze. Similarly, we want to understand how machine learning systems solve a problem. New tools have been developed to find out what strategies a learning machine uses, such as what it pays attention to when classifying images.
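One simple way to ask "where is the model looking?" is a sensitivity (saliency) map: perturb each input pixel slightly and measure how much the class score changes. The sketch below is illustrative only; the fixed scoring function and all names are assumptions, standing in for a trained classifier.

```python
import numpy as np

# A fixed toy "classifier" score: by construction it responds only to
# the centre pixel of a 3x3 input (weights are illustrative).
W = np.zeros((3, 3))
W[1, 1] = 5.0

def score(img):
    return float(np.sum(W * img))

def saliency(img, eps=1e-4):
    """Finite-difference saliency map: the magnitude of the change in
    the class score when each input pixel is nudged by eps."""
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bumped = img.copy()
            bumped[i, j] += eps
            sal[i, j] = (score(bumped) - score(img)) / eps
    return np.abs(sal)

rng = np.random.default_rng(0)
img = rng.random((3, 3))
sal = saliency(img)

# The map highlights the centre pixel, the only one this model "attends" to.
print(np.unravel_index(np.argmax(sal), sal.shape))  # -> (1, 1)
```

For real networks the same question is usually answered with gradient-based methods (one backward pass instead of one forward pass per pixel), but the interpretation is identical: large saliency marks the inputs the decision depends on.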
Research on reinforcement learning in artificial agents typically focuses on a single complex problem within a static environment. In biological agents, research focuses on simple learning problems embedded in flexible, dynamic environments. The authors review the literature on these topics and suggest areas of synergy between them.
A bibliometric analysis of the past and present of AI research suggests a consolidation of research influence. This may present challenges for the exchange of ideas between AI and the social sciences.
A survey of 300 fictional and non-fictional works featuring artificial intelligence reveals that imaginings of intelligent machines may be grouped in four categories, each comprising a hope and a parallel fear. These perceptions are decoupled from what is realistically possible with current technology, yet influence scientific goals, public understanding and regulation of AI.