Saturday, November 17, 2018

AI SPECIAL..... How AI is helping Amazon become a trillion-dollar company PART I

An exclusive look at how AI shapes every aspect of Amazon’s business, from its warehouses full of products to your Echo smart speaker.

Swami Sivasubramanian lives in a wooded area in the Seattle suburbs that’s a favorite with opportunistic local bears. From time to time, usually on garbage night, the animals wander into Sivasubramanian’s backyard to pillage his trash. But try as they might, he and his family had never managed to spot the intruders.
“My wife really wanted to see these bears in action,” says Sivasubramanian, Amazon’s VP of machine learning. “She will always try to stay up looking for bears to visit, and she wants me to give her company.”
Sivasubramanian cops to being kind of lazy on that front. But as a technologist, he’s much more proactive. He found his solution in DeepLens, a new video camera system from Amazon Web Services that lets anyone with programming skills employ deep learning to automate various tasks. DeepLens let him placate his wife by building “a machine learning model that actually detects a bear and sends a text to her phone, so that she can wake up, saying, ‘Hey, a bear is right there digging up the trash,’ ” he says.
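Amazon hasn’t published Sivasubramanian’s code, but the pattern he describes–run a vision model over each camera frame, fire off a text when a target label crosses a confidence threshold–can be sketched generically. Everything below is a hypothetical stand-in: `detect_labels` represents the deployed DeepLens model and `send_sms` represents whatever notification service delivers the text; neither is a real AWS API.

```python
# Hypothetical sketch of a DeepLens-style alert loop: a vision model
# labels each frame, and a text goes out when a bear shows up.
# detect_labels and send_sms are illustrative stand-ins, not AWS APIs.

CONFIDENCE_THRESHOLD = 0.8  # ignore low-confidence detections

def detect_labels(frame):
    """Stand-in for the deep-learning model's inference call.

    Expected to return an iterable of (label, confidence) pairs.
    """
    raise NotImplementedError

def send_sms(message):
    """Stand-in for a notification service that texts a phone."""
    print(message)

def process_frame(frame, detect=detect_labels, notify=send_sms):
    """Send one alert if a bear is detected above the threshold."""
    for label, confidence in detect(frame):
        if label == "bear" and confidence >= CONFIDENCE_THRESHOLD:
            notify(f"A bear is digging up the trash! "
                   f"(confidence {confidence:.0%})")
            return True
    return False
```

The threshold is the practical knob here: set it too low and every passing raccoon wakes the household; too high and the bear goes unreported.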
DeepLens can perform plenty of other machine-vision tricks, such as figuring out if food is a hot dog or not a hot dog (yes, that’s a Silicon Valley reference). It can also transfer an artistic style from one image to an entire video sequence. It’s just one of myriad ways that Amazon is using AI and machine learning across its many businesses, both for carrying out internal processes and for improving customers’ experience.

Since its earliest days, Amazon has used AI to come up with product recommendations based on what users already said they liked. The algorithms behind those systems have been tweaked again and again over the years. These days, thanks to machine learning, the recommendations have gotten more dynamic, says Jeff Wilke, the CEO of Amazon’s worldwide consumer division. “Say there’s a new piece of fashion that comes into the fall season,” he explains. “In the past it might take longer for the algorithms that we use to realize that people who bought these shoes also bought this top. And with some of the new techniques we can detect those things earlier, those correlations. And then surface the new top earlier in the season.”
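Wilke doesn’t describe Amazon’s algorithm, but the classic “people who bought these shoes also bought this top” signal is an item-to-item co-occurrence count over recent orders. A toy sketch of that idea, with invented example baskets:

```python
from collections import Counter

def also_bought(orders, item, top_n=3):
    """Rank items most often co-purchased with `item` in the same order."""
    co_counts = Counter()
    for basket in orders:
        if item in basket:
            co_counts.update(other for other in basket if other != item)
    return [other for other, _ in co_counts.most_common(top_n)]

# Invented baskets: the new fall top starts co-occurring with a pair
# of shoes, so it surfaces as a recommendation after just two orders.
orders = [
    {"shoes", "fall top"},
    {"shoes", "fall top", "scarf"},
    {"shoes", "jeans"},
]
print(also_bought(orders, "shoes"))  # "fall top" ranks first
```

Counting over a sliding window of recent orders is what makes the recommendation “dynamic” in Wilke’s sense: a new correlation shows up as soon as a handful of co-purchases accumulate, rather than waiting for a long history.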
The Echo Dot–and every Alexa-powered device–is infused with Amazon AI. [Photo: courtesy of Amazon]

Other Amazon AI and machine-learning efforts power the Alexa voice assistant, give users of Amazon Web Services access to cloud-based tools, allow shoppers to grab items and walk immediately out of Amazon Go stores, guide robots carrying shelves full of products directly to fulfillment-center workers, and much more. And while the technology is vital to Amazon across most of its businesses, the range of its applications is still stunning. It’s also a key reason why the company (briefly) hit $1 trillion in market cap, and stands every chance of getting back there for the long haul.
A company-wide mantra at Amazon is that every day is “Day One,” a humble contention that for all Jeff Bezos’s brainchild has accomplished, it’s just getting started. When it comes to AI and machine learning, Sivasubramanian doesn’t just pull out the standard “Day One” reference. He jokes that “it’s Day One, but it’s so early that we just woke up and haven’t even had a cup of coffee yet.”

DANCE OF THE ROBOTS
Deep inside Amazon’s 855,000-square-foot fulfillment center in Kent, Washington, 18 miles south of Seattle, a bunch of orange Amazon robots are doing a dance. Balanced on top of each of the orange machines is a yellow pod with nine rows of product-packed shelves on each of four sides. Powered by AI, each of the robots automatically sprang into action when someone somewhere in the Pacific Northwest purchased something on Amazon.com, and each is now autonomously maneuvering its way around the others in a bid to get to a station at the edge of the fenced-off robotic field where a worker will pluck the item in question and put it on a conveyor belt toward another worker who will box it up.
At the scale that Amazon processes orders, peak efficiency is essential. Magnified over millions upon millions of orders a year, even a second or two saved per order makes a huge bottom-line difference.
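The arithmetic behind that claim is simple but stark. With made-up round numbers–not Amazon’s actual order volumes–it looks like this:

```python
# Illustrative back-of-the-envelope math. The order volume is a
# hypothetical round number, not an Amazon figure.
orders_per_year = 1_000_000_000   # assume one billion orders a year
seconds_saved_per_order = 2       # "even a second or two"

total_seconds = orders_per_year * seconds_saved_per_order
labor_hours_saved = total_seconds / 3600
print(f"{labor_hours_saved:,.0f} hours of handling time saved per year")
```

Even under these rough assumptions, two seconds per order works out to hundreds of thousands of labor hours a year, which is why such small per-order savings show up on the bottom line.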
For some time, Amazon has used machine learning in its fulfillment centers “to improve our ability to predict what customers are ordering and place it in the right place,” says Wilke, “and also to improve the efficiency and speed with which we get things to consumers.”
It might not seem all that sexy, but a recent AI-based innovation that allows workers in those fulfillment centers to skip one manual item-scanning step per order is a big win for the company. The new technique is being applied to Amazon’s long-standing stowing process, which lets workers store items that have arrived from distributors and manufacturers anywhere on a warehouse’s shelves–so long as their location is recorded in a computer so that they can be found again on the first try. Until now, the method has involved workers grabbing an item out of a box, using a bar-code scanner to scan it, placing it on a shelf, and then scanning the shelf. The dual scan associates the item with its location.
Now, thanks to a combination of advanced computer vision and machine-learning technology, workers will be able to simply pick up an item in both hands, slide it under a scanner mounted nearby and place it in a bin. The system is smart enough to recognize where the item was placed and record it for future reference, without the worker having to scan the bin.
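In data terms, both the old and new stow flows end in the same record: an item ID mapped to a bin ID, so the item can be found on the first pick. A minimal sketch of that bookkeeping, with the computer-vision step stubbed out (`infer_bin_from_video` is a hypothetical stand-in for Amazon’s vision system, and the IDs are invented):

```python
# Sketch of the stow bookkeeping behind both workflows. The end state
# is identical: a lookup table from item bar code to storage bin.
# infer_bin_from_video is a hypothetical stand-in for the CV system.

locations = {}  # item bar code -> bin ID

def stow_with_two_scans(item_code, bin_code):
    """Old flow: worker scans the item, then scans the shelf/bin."""
    locations[item_code] = bin_code

def stow_with_vision(item_code, infer_bin_from_video):
    """New flow: one item scan; cameras watch which bin it lands in."""
    locations[item_code] = infer_bin_from_video()

def find(item_code):
    """Pick-side lookup: where was this item stowed?"""
    return locations[item_code]

stow_with_two_scans("ITEM-001", "BIN-A7")
stow_with_vision("ITEM-002", lambda: "BIN-C3")  # CV says it went in C3
print(find("ITEM-002"))  # BIN-C3
```

The efficiency gain lives entirely in the second function’s signature: the bin ID comes from the cameras instead of a second manual scan, which is exactly the freed hand Porter describes next.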
Brad Porter, VP of engineering at Amazon Robotics, says that freeing up the hand that would have been used to wield a bar-code scanner is a big boon to efficiency. “After about five minutes of doing it myself, I realized that I could pick up five or six small items… hold them in my left hand, grab one, scan it, put it in, grab one, scan it, put it in,” he says. “It’s super natural, super easy.”
The new system, which took about 18 months to develop, uses computer vision and machine-learning algorithms to evaluate how a worker is touching items and determine when those items have been placed in a bin. Porter characterized the algorithms as among the “more sophisticated” ones Amazon is using, given the need to tell whether a worker is holding an item up alongside a bin or actually placing it inside one. The system has to be able to work in different lighting conditions, and regardless of how full the bins are–something that can vary dramatically depending on the time of year.
In recent weeks, Amazon has turned the new system on at its Milwaukee fulfillment center and is getting ready to do the same in about 10 other centers. Given that any change to methods in Amazon’s fulfillment centers can’t introduce inefficiencies without a massive negative impact, Porter’s team had to be sure the innovation was ready. The question, he says, was whether to turn the system on for peak holiday season this year, “and we pretty much made the decision that we’re ready to go.”
It’s not clear when–or even if–Amazon will roll out the new system at all of its fulfillment centers. Regardless, Porter is already thinking about how to improve it. That boils down to leveraging advances in camera technology and machine-vision processing speed. He imagines upgrading the system with more cameras, making it possible to recognize bar codes on a package without the worker even having to orient it toward a scanner. It might only save half a second per item, but at Amazon’s scale, that makes it very sexy indeed.
CONTINUES IN PART II
