Dr LeCun's artificial visual cortex, by contrast, lights on the appropriate filters automatically as it is taught to distinguish the different types of object. When an image is fed into the unprimed system and processed, the chances are it will not, at first, be assigned to the right category. But, shown the correct answer, the system can work its way back, modifying its own parameters so that the next time it sees a similar image it will respond appropriately. After enough trial runs, typically 10,000 or more, it makes a decent fist of recognising that class of objects in unlabelled images.
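The correct-the-error loop described above can be sketched in miniature. This is not Dr LeCun's ConvNet but a single artificial neuron trained the same way: guess, compare with the right answer, and work back to nudge each parameter. The toy data, learning rate and number of trial runs are invented for illustration.

```python
import math
import random

random.seed(0)

# Toy labelled data: points left of x=0 belong to class 0, right to class 1.
data = [((-1.0, 0.5), 0), ((-0.8, -0.2), 0), ((0.9, 0.1), 1), ((1.2, -0.4), 1)]

w = [0.0, 0.0]  # the parameters the system modifies
b = 0.0
lr = 0.5        # how strongly each error nudges the parameters

def predict(x):
    # A single logistic unit: output near 1 means "class 1".
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Many trial runs: each imperfect answer pushes the parameters back
# towards producing the correct one next time.
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y       # how far off the guess was
        w[0] -= lr * err * x[0]    # work back, adjusting each parameter
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the unit sorts the toy points into the right categories.
print(all((predict(x) > 0.5) == (y == 1) for x, y in data))
```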
This still requires human input, though. The next stage is unsupervised learning, in which instruction is entirely absent. Instead, the system is shown lots of pictures without being told what they depict. It knows it is on to a promising filter when the output image resembles the input. In a computing sense, resemblance is gauged by the extent to which the input image can be recreated from the lower-resolution output. When it can, the filters the system had used to get there are retained.
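The reconstruction test above can be made concrete with a toy example. An input is reduced to a lower-resolution output, re-expanded, and the filter is judged by how closely the re-expansion matches the original; the filter that recreates the input best is the one retained. The one-dimensional "image" and the two candidate filters here are invented for illustration.

```python
image = [2.0, 4.0, 4.0, 2.0]  # a tiny 1-D "image"

def downsample_avg(img):
    # Candidate filter A: average each pair of pixels.
    return [(img[i] + img[i + 1]) / 2 for i in range(0, len(img), 2)]

def downsample_first(img):
    # Candidate filter B: keep only the first pixel of each pair.
    return [img[i] for i in range(0, len(img), 2)]

def upsample(small):
    # Recreate a full-resolution image by repeating each output pixel.
    return [p for p in small for _ in range(2)]

def reconstruction_error(img, filt):
    # How badly the re-expanded output fails to resemble the input.
    rebuilt = upsample(filt(img))
    return sum((a - r) ** 2 for a, r in zip(img, rebuilt)) / len(img)

err_avg = reconstruction_error(image, downsample_avg)
err_first = reconstruction_error(image, downsample_first)

# The filter whose output best recreates the input is retained.
retained = "average" if err_avg < err_first else "first-pixel"
print(retained)  # → average
```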
In a tribute to nature's nous, the lowest-level filters arrived at in this unaided process are edge-seeking ones, just as in the brain. The top-level filters are sensitive to all manner of complex shapes. Caltech-101, a database routinely used for vision research, consists of some 10,000 standardised images of 101 types of just such complex shapes, including faces, cars and watches. When a ConvNet with unsupervised pre-training is shown the images from this database it can learn to recognise the categories more than 70% of the time. This is just below what top-scoring hand-engineered systems are capable of, and those tend to be much slower.