On the left is an image that was put through a neural network trained to classify objects in images, for example, to tell whether an image contains a vase or a lemon. On the right is a visualization of what one layer in the middle of the network detected at each position of the image. The neural network appears to be detecting vase-like and lemon-like patterns.
Credit: The Building Blocks of Interpretability
Machines are starting to learn tasks on their own. They are identifying faces, recognizing spoken words, reading medical scans and even carrying on their own conversations.
From The New York Times