Computer vision, that is.
We went over this in class a little bit, mostly stuff I was already aware of, but I figured it might be interesting for you folks too. I mean, soon Google’s self-driving cars will be hitting the roads, yet computers still struggle to tell a bird from a tree, so what gives? What gives is the problem that has plagued automation ever since slavery got abolished: computers aren’t humans.
It’s been a couple of years since we humans first started developing, with our new-fangled big brains and the intellect that came with them. It was great, we could see things miles away and spot danger in the dark. We finally had a fighting chance! This and that happened, some folks decided to step up their game, and here we are today. Computers didn’t walk that road. They never developed stuff like instinct to make rapid decisions, or pattern recognition to filter out important information from a whole bunch of data. So we have to teach them.
And the teaching’s coming a long way. A neural network aims to simulate our human kind of learning digitally: show it “right” and “wrong” often enough and it learns the difference between the two. Computers are still handicapped though. As advanced as their hardware is, it can’t match the brilliant creation that is the human body. In this specific case, our eyes. They adapt to everything you throw at them, and what they can’t do themselves, the brain handles. You can’t throw a camera into a dark room and expect it to see what a human would see, and bright things don’t play nicely either.
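To make that “show it right and wrong” idea concrete, here’s a minimal sketch of a single artificial neuron (a perceptron) learning to tell bright pixels from dark ones. The function name, the learning rate, and the toy brightness data are all made up for illustration; real vision networks chain millions of these units together, but the nudge-when-wrong training loop is the same spirit.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature, label) pairs, label 0 or 1."""
    weight, bias = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:
            prediction = 1 if weight * x + bias > 0 else 0
            error = label - prediction   # 0 when right, +/-1 when wrong
            weight += lr * error * x     # nudge the weight toward the answer
            bias += lr * error           # nudge the threshold too
    return weight, bias

# Toy data: pixel brightness (0..1) labeled "dark" (0) or "bright" (1).
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
w, b = train_perceptron(data)
print([1 if w * x + b > 0 else 0 for x, _ in data])  # → [0, 0, 0, 1, 1, 1]
```

The network starts out guessing randomly (here: always “dark”), gets corrected every time it’s wrong, and after a few passes over the data it draws its own line between the two classes without ever being told where that line is.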
Conclusion: making computers do human things is hard because they weren’t made for it, just like we weren’t made to do computer things.