What's up in Neural Networks
Latest Articles
Novel Architecture Makes Neural Networks More Understandable
By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.
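For context, the decades-old principle behind these networks is the Kolmogorov-Arnold representation theorem, which says that any continuous function of several variables on the unit cube can be built entirely from sums of continuous one-variable functions:

$$
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),
$$

where each $\Phi_q$ and $\varphi_{q,p}$ is a continuous function of a single variable. Kolmogorov-Arnold networks take this structure as an architectural template, making those one-variable functions learnable rather than relying on fixed activation functions.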
Are Robots About to Level Up?
Today’s AI largely lives in computers, but acting and reacting in the real world — that’s the realm of robots. In this week’s episode, co-host Steven Strogatz talks with pioneering roboticist Daniela Rus about creativity, collaboration, and the unusual forms robots of the future might take.
Will AI Ever Have Common Sense?
Common sense has long been viewed as one of the hardest challenges in AI. Yet GPT-4 has acquired what some believe is an impressive sense of humanity. How is this possible? Listen to this week's episode of "The Joy of Why" with co-host Steven Strogatz.
What Is Machine Learning?
Neural networks and other forms of machine learning ultimately learn by trial and error, one improvement at a time.
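As a minimal sketch of that trial-and-error loop, the Python snippet below fits a straight line by repeatedly measuring its error and nudging its two parameters in the direction that shrinks it; the toy data, learning rate and step count are invented for illustration, not taken from the article.

```python
# Toy data that roughly follows y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

w, b = 0.0, 0.0           # start with a (bad) guess for slope and intercept
LEARNING_RATE = 0.01      # how big each small improvement is

for step in range(2000):
    # How does the mean squared error change as w and b change? (the gradient)
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)

    # Nudge the parameters slightly in the direction that reduces the error.
    w -= LEARNING_RATE * grad_w
    b -= LEARNING_RATE * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # approaches w = 2, b = 1
```

A neural network does the same thing at vastly larger scale: millions or billions of parameters, each adjusted a little at a time to reduce the error on the training data.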
Computation Is All Around Us, and You Can See It if You Try
Computer scientist Lance Fortnow writes that by embracing the computations that surround us, we can begin to understand and tame our seemingly random world.
AI Needs Enormous Computing Power. Could Light-Based Chips Help?
Optical neural networks, which use photons instead of electrons, have advantages over traditional systems. They also face major obstacles.
Game Theory Can Make AI More Correct and Efficient
Researchers are drawing on ideas from game theory to improve large language models and make them more consistent.
Does AI Know What an Apple Is? She Aims to Find Out.
The computer scientist Ellie Pavlick is translating philosophical concepts such as “meaning” into concrete, testable ideas.
AI Starts to Sift Through String Theory’s Near-Endless Possibilities
Using machine learning, string theorists are finally showing how microscopic configurations of extra dimensions translate into sets of elementary particles — though not yet those of our universe.