
How Data Helps Deliver Your Dinner On Time and Warm

Guidebooks highlight San Francisco’s Hayes Valley neighborhood for its lively bars and restaurants, nurtured by the removal of an earthquake-damaged freeway and swelling tech industry salaries. At Uber’s headquarters nearby, data scientists working on the company’s food delivery service, Uber Eats, view the scene through a more numerical lens.

Their logs indicate that restaurants in the area take an average of 12 minutes and 36 seconds to prepare an evening order of pad thai—that’s 3 minutes and 2 seconds faster than in the Mission District to the south. That stat may seem obscure, but it’s at the heart of Uber’s bid to build a second giant business to stand alongside its ride-hailing service.

Uber is fighting other well-funded startups and publicly listed GrubHub in the fast-growing market for food delivery apps. Winning market share and making the business profitable depend in part on predicting the future, down to the prep time of each noodle dish. Getting it wrong means cold food, unhappy drivers, or disloyal customers in a ruthlessly competitive market.

The mobile apps of Uber Eats and competitors such as DoorDash list menu items from local restaurants. When a user places an order, the delivery service passes it along to the restaurant. The service tries to dispatch a driver to arrive just as the food is ready, drawing on a pool of independent contractors, like in the ride-hailing business. Meanwhile, the customer is shown a prediction, to the nearest minute, of when their food will arrive.
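To make the timing concrete, here is a minimal sketch of that dispatch-and-ETA logic in Python. The numbers and function names are illustrative assumptions, not Uber's or DoorDash's actual systems; the prep time borrows the article's roughly 12.6-minute pad thai figure.

```python
from dataclasses import dataclass

@dataclass
class OrderEstimate:
    prep_min: float                  # predicted kitchen prep time for the dish
    drive_to_store_min: float        # driver's travel time to the restaurant
    drive_to_customer_min: float     # restaurant-to-customer travel time
    handoff_buffer_min: float = 2.0  # parking, pickup, walking to the door

def minutes_until_dispatch(est: OrderEstimate) -> float:
    """Hold the dispatch so the driver arrives roughly when the food is ready."""
    return max(0.0, est.prep_min - est.drive_to_store_min)

def promised_eta_min(est: OrderEstimate) -> float:
    """ETA shown to the customer: whichever finishes later (food or driver),
    plus the trip to the customer and a handoff buffer."""
    driver_at_store = minutes_until_dispatch(est) + est.drive_to_store_min
    ready = max(est.prep_min, driver_at_store)
    return ready + est.drive_to_customer_min + est.handoff_buffer_min

# An evening pad thai order with ~12.6 minutes of predicted prep time.
est = OrderEstimate(prep_min=12.6, drive_to_store_min=7.0, drive_to_customer_min=9.0)
print(round(minutes_until_dispatch(est), 1))  # hold the dispatch ~5.6 minutes
print(round(promised_eta_min(est), 1))        # customer sees ~23.6 minutes
```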

“The more detail with which we can model the physical world, the more accurate we can be,” says Eric Gu, an engineering manager with Uber Eats’ data team. The company employs meteorologists to help predict the effect of rain or snow on orders and delivery times. To refine its predictions, it also tracks when drivers are sitting or standing still, driving, or walking—joining the growing ranks of employers monitoring their workers’ every move.

Improved accuracy can convert directly into dollars, for example by helping Uber combine orders so that drivers carry multiple meals without any getting cold. Drivers get a small bonus for ferrying multiple orders on one trip. “We can save on delivery costs and pass back some savings to the eater,” Gu says.

Four blocks away, Uber rival DoorDash has its own team of data mavens working on an AI-powered crystal ball for food deliveries. One of their findings is that sunset matters. People tend to order dinner when it’s dusk, meaning they eat later in summer and shift their habits when the clocks change in spring and fall. Like Uber, the company keeps a close eye on sports schedules and weather patterns, while also tracking prep times for the dishes offered at different restaurants. Company data indicates that pad thai takes 2 minutes longer to prepare Friday through Sunday than during the rest of the week, because kitchens are busier.
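A hedged sketch of how findings like these could feed a prep-time estimate. Only the roughly two-minute weekend bump comes from the article; the base times, weather penalty, and function name are invented for illustration.

```python
# Illustrative prep-time adjustment; only the ~2-minute weekend bump is from the article.
BASE_PREP_MIN = {"pad thai": 12.5, "burrito": 6.0}   # invented base times

def predicted_prep_minutes(dish: str, weekday: int, raining: bool) -> float:
    minutes = BASE_PREP_MIN.get(dish, 10.0)
    if weekday >= 4:      # Friday (4) through Sunday (6): busier kitchens
        minutes += 2.0
    if raining:           # hypothetical weather penalty
        minutes += 1.5
    return minutes

print(predicted_prep_minutes("pad thai", weekday=5, raining=False))  # 14.5
```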

Rajat Shroff, vice president of product, says DoorDash data also clearly shows the connection between accurate delivery predictions and customer loyalty. “That’s driving a big chunk of our growth,” he says. The company was valued at $7 billion this month by investors who plowed in $400 million of fresh funding.

DoorDash has also been working to better understand what happens in restaurants, for example by connecting its systems with Chipotle’s in-house software so orders can be sent in more smoothly, and DoorDash can track how they’re progressing. The company has built a food-delivery simulator in which past data is replayed to test different scheduling and prediction algorithms. Both DoorDash and Uber use their data to offer drivers more money to head to areas where demand is expected to be strong.
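The simulator idea, reduced to a sketch: replay historical orders through a candidate prediction function and score its error offline. The data shape, numbers, and function names here are assumptions, not DoorDash's system.

```python
from statistics import mean

# Historical orders as (features, actual delivery minutes) -- synthetic examples.
history = [
    ({"dish": "pad thai", "weekday": 5}, 36.0),
    ({"dish": "pad thai", "weekday": 1}, 31.5),
    ({"dish": "burrito",  "weekday": 6}, 28.0),
]

def candidate_model(features: dict) -> float:
    """A stand-in prediction algorithm to be evaluated against the replay."""
    base = 30.0 if features["dish"] == "pad thai" else 25.0
    return base + (3.0 if features["weekday"] >= 4 else 0.0)

def replay(history, model) -> float:
    """Mean absolute error of a candidate model replayed over past orders."""
    return mean(abs(model(f) - actual) for f, actual in history)

print(round(replay(history, candidate_model), 2))  # 1.5
```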

Analytics company Second Measure says credit card data shows that DoorDash overtook Uber Eats for second place in US market share in November, behind GrubHub. As of January, the company says, GrubHub took 43 percent of food-delivery sales, compared with 31 percent for DoorDash and 26 percent for Uber Eats. DoorDash is a customer of Second Measure.

Still, DoorDash says it gets orders to customers in an average of 35 minutes. That’s slightly slower than the 31 minutes Janelle Sallenave, head of Uber Eats for the US and Canada, says her service averages for the US.

Uber’s data scientists have a potentially big advantage over their competitors: the rich live and historical traffic data from the company’s ride-hailing network. The company is also digging more deeply into its data on restaurants and Uber Eats drivers.

One project involves analyzing the language on restaurant menus. The goal is to have algorithms predict prep times for dishes it doesn’t yet have good data about by pulling data from menu items that involve similar ingredients and cooking processes.
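A minimal sketch of that idea, under the assumption that menu-text similarity is a reasonable proxy: represent descriptions as TF-IDF vectors and borrow prep times from the most similar known dishes. scikit-learn stands in for whatever Uber actually uses, and the dishes and times are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Dishes with known prep times (minutes) and their menu descriptions -- synthetic.
known = {
    "pad thai: stir-fried rice noodles, egg, peanuts, tamarind": 12.6,
    "green curry: simmered coconut curry with chicken and basil": 15.0,
    "spring rolls: fried rolls with vegetable filling": 6.0,
}
new_dish = "drunken noodles: stir-fried wide rice noodles, egg, basil"

vec = TfidfVectorizer()
known_mat = vec.fit_transform(list(known))
new_mat = vec.transform([new_dish])
sims = cosine_similarity(new_mat, known_mat).ravel()

# Estimate the unseen dish's prep time as a similarity-weighted average.
weights = sims / sims.sum()
estimate = sum(w * t for w, t in zip(weights, known.values()))
print(round(estimate, 1))  # weighted toward pad thai's 12.6 minutes
```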

Chris Muller, a professor at Boston University, says the data-centric view of dining taken by Uber Eats and its competitors is helping to drive a major upheaval of the restaurant business. “This is the biggest single transformation since we saw the growth of fast casual” chains like Chipotle that promise speedy meals of higher quality than fast food.

Joe Hargrave, who grew a farmers’ market stand into five Bay Area taco shops, is living through the food app transformation. He designed his Tacolicious stores for people who share his love of good food you can eat with your fingers while watching baseball. Now, more of his customers are eating their tacos at home, and delivery has become a lifeline.

Orders via apps including DoorDash and Caviar make up about 12 percent of Hargrave’s business, he says. They’ve helped revenue grow 8 percent over the past year, even while in-store business shrank by roughly a quarter. He appreciates what the apps do, but accommodating the delivery boom hasn’t been easy.

“I’ve spent my whole career trying to figure out how to put the best product in front of people,” Hargrave says. “Now I’ve been thrown this curveball where I have to put it in a box.” Tacolicious switched its register system to better handle delivery orders without compromising in-store service. There’s now often a person in each restaurant working exclusively on packaging and checking delivery orders.

Muller and Hargrave say the app-and-algorithm approach to dining can squeeze conventional restaurants and could even put some out of business. Uber’s standard cut of each order is 30 percent, a significant bite in a traditionally low-margin industry. Even restaurants like Tacolicious that accommodate delivery services must also serve people who walk in the door.

That’s one reason Uber is encouraging the development of “virtual restaurants,” which operate out of an existing restaurant’s kitchen but sell only via the Uber Eats app. Uber said last year that it was working with more than 800 virtual restaurants in the US; many operate during hours when a restaurant’s main business is slack or closed, allowing more efficient operation and use of the property.

Uber and DoorDash also work with so-called dark kitchens, operations that serve only via delivery apps and can be more efficient and predictable than conventional restaurants. DoorDash operates a 2,000-square-foot kitchen space in the Bay Area that it rents to such operators.

Muller likens the arrival of Uber Eats and others to how online travel sites shook up the hotel industry, forcing hoteliers to adapt their business models to a market where consumers are more engaged, driving more visits, but at lower prices.

How lucrative this new form of restaurant business will be is unclear. Uber has previously said its service is profitable in some cities, but financials released for the last quarter of 2018 didn’t offer detail about Uber Eats. In all, the company said it lost $940 million, 40 percent more than the previous quarter. In the third quarter of 2018, the company said Uber Eats accounted for 17 percent, or $2.1 billion, of its worldwide gross bookings.

GrubHub has been consistently profitable since it went public in 2014 and sold $1.4 billion worth of food in the final quarter of 2018, an increase of 21 percent over the previous year. But it also reported a small loss after a big jump in marketing spending. GrubHub’s management told investors that competition wasn’t harming growth, but analysts interpreted the company’s results as showing how the rise of DoorDash and Uber Eats will put all the delivery apps under pressure.

Uber and DoorDash both declined to provide more detail about their businesses but are rapidly expanding their reach. DoorDash says it covers 80 percent of the US population, and Uber Eats claims to have reached more than 70 percent, in addition to serving more than 100 cities in Africa, Asia, and Europe. Sallenave, the Uber Eats head for the US and Canada, predicts eating via app will become the norm everywhere, not just in urban areas. “We fundamentally believe we can make this business economically viable, not only in large cities but also in small towns and in the suburbs,” she says.


Read more: https://www.wired.com/story/how-data-helps-deliver-your-dinner-on-time-warm/


Banuba raises $7M to supercharge any app or device with the ability to really see you

Walking into the office of Viktor Prokopenya — which overlooks a central London park — you would perhaps be forgiven for missing the significance of this unassuming location, just south of Victoria Station in London. While giant firms battle globally to make augmented reality a “real industry,” this jovial businessman from Belarus is poised to launch a revolutionary new technology for just this space. This is the kind of technology some of the biggest companies in the world are snapping up right now, and yet, scuttling off to make me a coffee in the kitchen is someone who could be sitting on just such a company.

Regardless of whether its immediate future is obvious or not, AR has a future if the amount of investment pouring into the space is anything to go by.

In 2016, AR and VR attracted $2.3 billion in investment (a 300 percent jump from 2015), and the market is expected to reach $108 billion by 2021, 25 percent of which will be aimed at the AR sector. And according to numerous forecasts, AR will overtake VR within 5 to 10 years.

Apple is clearly making headway in AR, having recently acquired AR lens company Akonia Holographics, and this month’s release of iOS 12 lets developers fully utilize ARKit 2, no doubt prompting a new wave of camera-centric apps. This year Sequoia Capital China and SoftBank invested $50 million in AR camera app Snow. Samsung recently introduced its version of the AR cloud and a partnership with Wacom that turns Samsung’s S-Pen into an augmented reality magic wand.

The IBM/Unity partnership allows developers to integrate into their Unity applications Watson cloud services such as visual recognition, speech to text and more.

So there is no question that AR is becoming increasingly important, given the sheer amount of funding and M&A activity.

Joining the field is Prokopenya’s “Banuba” project. Although you can download a Snapchat-like app called “Banuba” from the App Store right now, underlying it is a suite of tools in which Prokopenya is the founding investor; he is working closely with the founding team of AI/AR experts behind it to realize a very big vision.

The key to Banuba’s pitch is the idea that its technology could equip not only apps but even hardware devices with “vision.” This is a perfect marriage of both AI and AR. What if, for instance, Amazon’s Alexa couldn’t just hear you? What if it could see you and interpret your facial expressions or perhaps even your mood? That’s the tantalizing strategy at the heart of this growing company.

Better known for its consumer apps, which have effectively been testing its concepts in the field for the last year, Banuba is about to move heavily into the world of developer tools with the release of its new Banuba 3.0 mobile SDK. (Available to download now in the App Store for iOS devices and the Google Play Store for Android.) It has also now secured a further $7 million in funding from Larnabel Ventures, the fund of Russian entrepreneur Said Gutseriev, and Prokopenya’s VP Capital.

This move will take its total funding to $12 million. In the world of AR, this is like a Romulan warbird de-cloaking in a scene from Star Trek.

Banuba hopes that its SDK will enable brands and apps to utilise 3D Face AR inside their own apps, meaning users can benefit from cutting-edge face motion tracking, facial analysis, skin smoothing and tone adjustment. Banuba’s SDK also enables app developers to utilise background subtraction, which is similar to “green screen” technology regularly used in movies and TV shows, enabling end-users to create a range of AR scenarios. Thus, like magic, you can remove that unsightly office surrounding and place yourself on a beach in the Bahamas…
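Banuba's background subtraction is neural-network-based, but the classic version of the technique is easy to sketch with OpenCV. This illustrates only the generic "green screen without a green screen" idea; it is not the Banuba SDK.

```python
import cv2

# Classic motion-based background subtraction (not Banuba's neural approach):
# pixels that differ from the learned background model are kept as "foreground".
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # 0 = background, 255 = foreground
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground only", foreground)   # composite this over any backdrop
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```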

Because Banuba’s technology equips devices with “vision,” meaning they can “see” human faces in 3D and extract meaningful subject analysis based on neural networks, including age and gender, it can do things that other apps just cannot do. It can even monitor your heart rate via spectral analysis of the time-varying color tones in your face.
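That heart-rate trick is known generically as remote photoplethysmography: skin color fluctuates slightly with each heartbeat, so the dominant frequency of, say, the green channel over time approximates the pulse. A toy version, assuming you already have a per-frame average color for the face region:

```python
import numpy as np

FPS = 30.0  # camera frame rate (assumed)

def estimate_bpm(green_means: np.ndarray, fps: float = FPS) -> float:
    """Estimate heart rate from the mean green value of the face region per frame."""
    signal = green_means - green_means.mean()           # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)   # in Hz
    band = (freqs > 0.7) & (freqs < 4.0)                # ~42-240 bpm is plausible
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 10-second trace: a 1.2 Hz (72 bpm) pulse buried in noise.
t = np.arange(0, 10, 1.0 / FPS)
fake = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(round(estimate_bpm(fake)))  # ~72
```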

It has already been incorporated into an app called Facemetrix, which can track a child’s eyes to ascertain whether they are reading something on a phone or tablet or not. Thanks to this technology, it is possible to not just “track” a person’s gaze, but also to control a smartphone’s function with a gaze. To that end, the SDK can detect micro-movements of the eye with subpixel accuracy in real time, and also detects certain points of the eye. The idea behind this is to “Gamify education,” rewarding a child with games and entertainment apps if the Facemetrix app has duly checked that they really did read the e-book they told their parents they’d read.

If that makes you think of a parallel with a certain Black Mirror episode where a young girl is prevented from seeing certain things via a brain implant, then you wouldn’t be a million miles away. At least this is a more benign version…

Banuba’s SDK also includes “Avatar AR,” empowering developers to get creative with digital communication by giving users the ability to interact with — and create personalized — avatars using any iOS or Android device. Prokopenya says: “We are in the midst of a critical transformation between our existing smartphones and future of AR devices, such as advanced glasses and lenses. Camera-centric apps have never been more important because of this.” He says that while developers using ARKit and ARCore are able to build experiences primarily for top-of-the-range smartphones, Banuba’s SDK can work on even low-range smartphones.

Why should users of Apple’s iPhone X be the only people to enjoy Animoji?

Banuba is also likely to take advantage of the news that Facebook recently announced it was testing AR ads in its newsfeed, following trials for businesses to show off products within Messenger.

Banuba’s technology won’t simply be for fun apps, however. Within two years, the company has filed 25 patent applications with the U.S. patent office, and six of those were processed in record time compared with the average. Its R&D center, staffed by 50 people and based in Minsk, is focused on developing a portfolio of technologies.

Interestingly, Belarus has become famous for AI and facial recognition technologies.

For instance, cast your mind back to early 2016, when Facebook bought Masquerade, a Minsk-based developer of a video filter app, MSQRD, which at one point was one of the most popular apps in the App Store. And in 2017, another Belarusian company, AIMatter, was acquired by Google, only months after raising $2 million. It too took an SDK approach, releasing a platform for real-time photo and video editing on mobile, dubbed Fabby. This was built upon a neural network-based AI platform. But Prokopenya has much bolder plans for Banuba.

In early 2017, he and Banuba launched a “technology-for-equity” program to enroll app developers and publishers across the world. This signed up Inventain, another startup from Belarus, to develop AR-based mobile games.

Prokopenya says the technologies associated with AR will be “leveraged by virtually every kind of app. Any app can recognize its user through the camera: male or female, age, ethnicity, level of stress, etc.” He says the app could then respond to the user in any number of ways. Literally, your apps could be watching you.

So, for instance, a fitness app could see how much weight you’d lost just by using the Banuba SDK to look at your face. Gaming apps could personalize a game based on what they know about your face, such as by reading your facial cues.

Back in his London office, overlooking a small park, Prokopenya waxes lyrical about the “incredible concentration of diversity, energy and opportunity” of London. “Living in London is fantastic,” he says. “The only thing I am upset about, however, is the uncertainty surrounding Brexit and what it might mean for business in the U.K. in the future.”

London may be great (and will always be), but sitting on his desk is a laptop with direct links back to Minsk, a place where the facial recognition technologies of the future are only now just emerging.

Read more: https://techcrunch.com/2018/11/26/banuba-raises-7m-to-supercharge-any-app-or-device-with-the-ability-to-really-see-you/


Not hog dog? PixFood lets you shoot and identify food

What happens when you add AI to food? Surprisingly, you don’t get a hungry robot. Instead you get something like PixFood. PixFood lets you take pictures of food, identify available ingredients, and, at this stage, find recipes you can make from your larder.

It is privately funded.

“There are tons of recipe apps out there, but all they give you is, well, recipes,” said Tonnesson. “On the other hand, PixFood has the ability to help users get the right recipe for them at that particular moment. There are apps that cover some of the mentioned, but it’s still an exhausting process – since you have to fill in a 50-question quiz so it can understand what you like.”

They launched in August and currently have 3,000 monthly active users from 10,000 downloads. They’re working on perfecting the system for their first users.

“PixFood is AI-driven food app with advanced photo recognition. The user experience is quite simple: it all starts with users taking a photo of any ingredient they would like to cook with, in the kitchen or in the supermarket,” said Tonnesson. “Why did we do it like this? Because it’s personalized. After you take a photo, the app instantly sends you tailored recipe suggestions! At first, they are more or less the same for everyone, but as you continue using it, it starts to learn what you precisely like, by connecting patterns and taking into consideration different behaviors.”
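Tonnesson's description boils down to a simple pipeline: recognize the ingredient in the photo, look up recipes for it, and rank them by what this user has cooked before. A sketch with placeholder pieces (the classifier stub and recipe table are invented, not PixFood's models or data):

```python
from collections import Counter

# Placeholder: a real app would run an image-recognition model here.
def classify_ingredient(photo_path: str) -> str:
    return "corn"  # pretend the model recognized corn in the photo

RECIPES = {  # toy recipe index
    "corn": ["corn chowder", "elote", "corn fritters"],
    "tomato": ["gazpacho", "marinara"],
}

def suggest(photo_path: str, cooked_before: Counter) -> list[str]:
    """Recipes for the recognized ingredient, ranked by the user's past behavior."""
    ingredient = classify_ingredient(photo_path)
    candidates = RECIPES.get(ingredient, [])
    return sorted(candidates, key=lambda r: -cooked_before[r])

history = Counter({"elote": 3, "corn chowder": 1})
print(suggest("fridge.jpg", history))  # ['elote', 'corn chowder', 'corn fritters']
```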

In my rudimentary tests the AI worked acceptably well and did not encourage me to eat a monkey. While the app begs the obvious question – why not just type in “corn?” – it’s an interesting use of vision technology that is definitely a step in the right direction.

Tonnesson expects the AI to start connecting you with other players in the food space, allowing you to order corn (but not a monkey) from a number of providers.

“Users should also expect partnerships with restaurants, grocery, meal-kit, and other food delivery services will be part of the future experiences,” he said.

Read more: https://techcrunch.com/2018/09/10/not-hog-dog-pixfood-lets-you-shoot-and-identify-food/


Barnes & Noble teeters in a post-text world

Barnes & Noble, that once proud anchor to many a suburban mall, is waning. It is not failing all at once, dropping like the savaged corpse of Toys “R” Us, but it is also clear that its cultural moment has passed and only drastic measures can save it from joining Waldenbooks and Borders in the great, paper-smelling ark of our book-buying memory. I’m thinking about this because David Leonhardt at The New York Times calls for B&N to be saved. I doubt it can be.

First, there is the sheer weight of real estate and the inexorable slide away from print. B&N is no longer a place to buy books. It is a toy store with a bathroom and a cafe (and now a restaurant?), a spot where you’re more likely to find Han Solo bobbleheads than a Star Wars novel. The old joy of visiting a bookstore and finding a few magical books to drag home is fast being replicated by smaller bookstores where curation and provenance are still important, while B&N pulls more and more titles.

But does all of this matter? Will the written word — what you’re reading right now — survive the next century? Is there any value in a book when VR and AR and other interfaces can recreate what amounts to the implicit value of writing? Why save B&N if writing is doomed?

Indulge me for a moment and then argue in the comments. I’m positing that B&N’s failure is indicative of a move toward a post-text society: AI and new media will redefine how we consume the world, and the fact that we see more video than text on our Facebook feed – ostensibly the world’s social nervous system – is a sign of this change.

First, some thoughts on writing versus film. In his book of essays, Distrust That Particular Flavor, William Gibson writes about the complexity and education and experience needed to consume various forms of media:

The book has been largely unchanged for centuries. Working in language expressed as a system of marks on a surface, I can induce extremely complex experiences, but only in an audience elaborately educated to experience this. This platform still possesses certain inherent advantages. I can, for instance, render interiority of character with an ease and specificity denied to a screenwriter.

But my audience must be literate, must know what prose fiction is and understand how one accesses it. This requires a complexly cultural education, and a certain socioeconomic basis. Not everyone is afforded the luxury of such an education.

But I remember being taken to my first film, either a Disney animation or a Disney nature documentary (I can’t recall which I saw first), and being overwhelmed by the steep yet almost instantaneous learning curve: In that hour, I learned to watch film.

This is a deeply important idea. First, we must appreciate that writing and film offer various value adds beyond linear storytelling. In the book, the writer can explore the inner space of the character, giving you an imagined world in which people are thinking, not just acting. Film — also a linear medium — offers a visual representation of a story and thoughts are inferred by dint of their humanity. We know a character’s inner life thanks to the emotion we infer from their face and body.

This is why, to a degree, the CGI human was so hard to make. Thanks to books, comics, and film we, as humans, were used to giving animals and enchanted things agency. Steamboat Willie mostly thought like us, we imagined, even though he was a mouse with big round ears. Fast-forward to the dawn of CGI humans — think Sid from Toy Story and his grotesque face — and then fly even further into the future to a CGI Leia looking out over a space battle and mumbling “Hope,” and you see the scope of achievement in CGI humans as well as the deep problems with representing humans digitally. A CGI car named Lightning McQueen acts and thinks like us while a CGI Leia looks slightly off. We cannot associate agency with fake humans, and that’s a problem.

Thus we needed books to give us that inner look, that frisson of discovery that we are missing in real life.

But soon — and we can argue that films like Infinity War prove this — there will be no uncanny valley. We will be unable to tell if a human on screen or in VR is real or fake and this allows for an interesting set of possibilities.

First, with VR and other tricks, we could see through a character’s eyes and even hear her thoughts. This interiority, as Gibson writes, is no longer found in the realm of text and is instead an added attraction to an already rich medium. Imagine hopping from character to character, the reactions and thoughts coming hot and heavy as they move through the action. Maybe the story isn’t linear. Maybe we make it up as we go along. Imagine the remix, the rebuild, the restructuring.

Gibson again:

This spreading, melting, flowing together of what once were distinct and separate media, that’s where I imagine we’re headed. Any linear narrative film, for instance, can serve as the armature for what we would think of as a virtual reality, but which Johnny X, eight-year-old end-point consumer, up the line, thinks of as how he looks at stuff. If he discovers, say, Steve McQueen in The Great Escape, he might idly pause to allow his avatar a freestyle Hong Kong kick-fest with the German guards in the prison camp. Just because he can. Because he’s always been able to. He doesn’t think about these things. He probably doesn’t fully understand that that hasn’t always been possible.

In this case B&N and the bookstore don’t need to exist at all. We get the depth of books with the vitality of film melded with the immersion of gaming. What about artisanal book lovers, you argue; won’t they keep things alive because they love the feel of books?

When that feel — the scent, the heft, the old book smell — can be simulated do we need to visit a bookstore? When Amazon and Netflix spend millions to explore new media and are sure to branch out into more immersive forms do you need to immerse yourself in To The Lighthouse? Do we really need the education we once had to gain in order to read a book?

We know that Amazon doesn’t care about books. They used books as a starting point to taking over e-commerce and, while the Kindle is the best system for e-books in existence, it is an afterthought compared to the rest of the business. In short, the champions of text barely support it.

Ultimately what I posit here depends on a number of changes coming all at once. We must all agree to fall headfirst into some shared hallucination that replaces all other media. We must feel that that world is real enough for us to abandon our books.

It’s up to book lovers, then, to decide what they want. They have to support and pay for novels, non-fiction, and news. They have to visit small booksellers and keep demand for books alive. And they have to make it possible to exist as a writer. “Publishers are focusing on big-name writers. The number of professional authors has declined. The disappearance of Borders deprived dozens of communities of their only physical bookstore and led to a drop in book sales that looks permanent,” writes Leonhardt and he’s right. There is no upside for text slingers.

In the end perhaps we can’t save B&N. Maybe we let it collapse into a heap like so many before it. Or maybe we fight for a medium that is quickly losing cachet. Maybe we fight for books and ensure that just because the big guys on the block can’t make a bookstore work, the rest of us don’t stop caring. Maybe we tell the world that we just want to read.

I shudder to think what will happen if we don’t.

Read more: https://techcrunch.com/2018/05/07/barnes-noble-teeters-in-a-post-text-world/


Marriott Wants to Be the Amazon of Travel

As part of Marriott’s announcement on Monday that it would hybridize the Marriott Rewards, Starwood Preferred Guest, and Ritz-Carlton Rewards programs this August, the company also relaunched its Moments marketplace, which sells everything from zip-line excursions to sumo wrestling tutorials and cooking classes with master chefs.

For the first time, loyal guests can both earn and redeem points by shopping for these experiences—110,000 of them in all, across 1,000 global destinations. But anyone, regardless of their participation in Marriott Rewards, will also be able to buy a sunset cruise off Marriott’s shelf with good, old-fashioned dollars.

Marriott’s new Moments portal.
Source: Marriott

“The opportunity for us is to expand the travel experience for our members,” David Flueck, Marriott’s senior vice president of loyalty, tells Bloomberg. “They’ve come to rely on Marriott for incredible brands and hotels; now we can deliver more to them.”

The pivot, he says, is about growing from a hotel brand to a lifestyle brand—something that Airbnb has already done with its own Experiences platform.

“Every brand in the travel space has to be more full-service,” explains Deanna Ting, hospitality editor at the travel industry website Skift. “It’s not a question of should they do this. For Marriott to compete, they have to do this.”

Here’s what it means for you.

Scale, Not Exclusivity

Marriott hosted a private concert with Keith Urban as part of its announcement this week—an example of the bookable experiences it will now sell on Moments.
Source: Marriott

Marriott’s overhaul of Moments—a platform that previously existed on a much smaller scale—is a direct result of the company’s spring 2017 acquisition of Place Pass, a meta-search site for local experiences.

But of the 100,000 plus experiences it now offers, only 8,000 are exclusive to Marriott and of the company’s own design.

Some of those include VIP access or front-row box seats at venues around the world, which Flueck says will be available “at every show.” Another subset is what Flueck describes as “once in a lifetime” experiences: a cooking class with Daniel Boulud in his private test kitchen, for instance, or surf lessons with legendary wave-chaser Laird Hamilton. Only a few dozen of these opportunities are available globally at any given time on an auction-only basis, selling for anywhere from 7,500 to 352,500 points; they cannot be purchased with dollars.

A helicopter tour of Chicago.
Photographer: kokouu/iStockphoto

From a business standpoint, Marriott hopes that the “once in a lifetime” experiences will drive people to plan purpose-built trips, while such “local experiences” as the walking tours will cater to travelers planning a vacation or already on the ground.

It’s scale, not exclusivity, that sets Marriott apart. According to Bjorn Hanson, clinical professor at NYU’s Tisch Center for Hospitality and Tourism, it gives Marriott “a positioning advantage that exceeds any other company I could conceive of as a challenger.” Partnerships with Hertz and StubHub expand the scope of Moments further—travelers can use it for everything from their rental cars to private dinners and opera tickets. They can earn points on each of those purchases, whether they use a co-branded credit card or not.

Still, Hanson expressed skepticism that Moments would change consumers’ behavior. “It doesn’t have enough urgency to it,” he says. “I’m not sure this will drive people to make reservations that they otherwise wouldn’t have made.”

Limited Personalization

The Colosseum in Rome, one of the iconic sights that travelers can explore on guided tours booked through Marriott.
Photographer: Marco Rubino/EyeEm

Skift’s Ting says the program will be most successful if Marriott can make it adaptable, with targeted marketing, for example; that’s a critical concern when the same platform is meant to serve guests of both budget brands like Courtyard by Marriott and such luxury ones as St. Regis. “I would hope they’re not going to steer a top-tier elite member to duck tours,” she jokes.

Harnessing consumer data for personalized service, as Starwood Preferred Guest has always done adeptly, will help. Artificial intelligence could, too. Deployed elegantly, it could let Marriott algorithmically know where you want to take your next trip, and what you want to do there, before you do.

“That’s the nice thing about our members,” says Marriott’s Flueck. “We have 110 million of them, but we have gotten to know them very well over time.” He adds that “being able to deliver the right experience to the right members at the right time” is “absolutely the direction that we’re going in.”

For now, personalization is limited, which means that it’s still cumbersome to sort through Moments’ 110,000 offerings. The company has started to group activities by type within each destination—family fun, great for couples, good eats, and nightlife—but luxury travelers looking for a fully private experience, for instance, may have to sort through clutter before finding what they want.

The End of the Concierge?

Will Moments eclipse the concierge?
Photographer: Chris Ratcliffe

The biggest short-term impact of Moments may be how you, as a traveler, think about concierges and, to a lesser extent, travel agents.

If the trustworthiness of concierge recommendations was already an issue, thanks to kickbacks, this will only intensify; Marriott is encouraging its staff to prioritize Moments in their recommendations, despite the fact that the company hasn’t actively vetted or quality-controlled the experiences it picked up in the Place Pass acquisition. According to Flueck, the company will look at user reviews to determine which experiences get cut from the roster.

“The luxury hospitality sector seems to be in an identity crisis right now, because so many traditional markers of luxury are not as essential anymore, including the concierge,” Ting explains. Hanson agrees. “Does the average 27-year-old want to go to a concierge—the person they think of as a white-haired gent in a tuxedo behind a desk—to find out where to go to dinner that night?” he asks, drawing attention to the fact that concierges today are more reservation-makers than recommenders.

Why bother the concierge when all these activities are right at your fingertips?
Source: Marriott

Marriott’s Flueck sees it less as the end of concierges than as an opportunity to redefine the role.

“The expectation is that we can bring our concierges’ immense and extraordinary local knowledge into the Moments platform,” he says. “It’ll take time to get there with 6,500 properties around the world, but we’d like them to really become our partners in this program.”

Travel agents will also have to prove their added value, or risk losing the customer to a seamless, online shopping experience—especially if loyal members believe they can get better value and earn points by booking their entire vacation through Marriott. And with Marriott-owned cruising on the horizon, the potential to book multiple types of vacations, and even port excursions, on one website may well become a reality. (Flueck did not say whether the company would be able to offer a best-price guarantee on activities as it does with direct hotel bookings.)

“It will be an education process,” says Ting about shifting consumer habits. “But gradually, travelers will start to think differently about how they book and plan their trips.”

Adds Hanson, “It redefines the relationship of the traveler with the hotel brand in a way that has never been done before.”

Read more: http://www.bloomberg.com/news/articles/2018-04-17/marriott-wants-to-be-the-amazon-of-travel


Inside Amazon’s Artificial Intelligence Flywheel

In early 2014, Srikanth Thirumalai met with Amazon CEO Jeff Bezos. Thirumalai, a computer scientist who’d left IBM in 2005 to head Amazon’s recommendations team, had come to propose a sweeping new plan for incorporating the latest advances in artificial intelligence into his division.

He arrived armed with a “six-pager.” Bezos had long ago decreed that products and services proposed to him must be limited to that length, and include a speculative press release describing the finished product, service, or initiative. Now Bezos was leaning on his deputies to transform the company into an AI powerhouse. Amazon’s product recommendations had been infused with AI since the company’s very early days, as had areas as disparate as its shipping schedules and the robots zipping around its warehouses. But in recent years, there has been a revolution in the field; machine learning has become much more effective, especially in a supercharged form known as deep learning. It has led to dramatic gains in computer vision, speech, and natural language processing.

In the early part of this decade, Amazon had yet to significantly tap these advances, but it recognized the need was urgent. This era’s most critical competition would be in AI—Google, Facebook, Apple, and Microsoft were betting their companies on it—and Amazon was falling behind. “We went out to every [team] leader, to basically say, ‘How can you use these techniques and embed them into your own businesses?’” says David Limp, Amazon’s VP of devices and services.

Thirumalai took that to heart, and came to Bezos for his annual planning meeting with ideas on how to be more aggressive in machine learning. But he felt it might be too risky to wholly rebuild the existing system, fine-tuned over 20 years, with machine-learning techniques that worked best in the unrelated domains of image and voice recognition. “No one had really applied deep learning to the recommendations problem and blown us away with amazingly better results,” he says. “So it required a leap of faith on our part.” Thirumalai wasn’t quite ready—but Bezos wanted more. So Thirumalai shared his edgier option of using deep learning to revamp the way recommendations worked. It would require skills that his team didn’t possess, tools that hadn’t been created, and algorithms that no one had thought of yet. Bezos loved it (though it isn’t clear whether he greeted it with his trademark hyena-esque laugh), so Thirumalai rewrote his press release and went to work.

Srikanth Thirumalai, VP of Amazon Search, was among the leaders tasked with overhauling Amazon’s software with advanced machine learning.

Ian C. Bates

Thirumalai was only one of a procession of company leaders who trekked to Bezos a few years ago with six-pagers in hand. The ideas they proposed involved completely different products with different sets of customers. But each essentially envisioned a variation of Thirumalai’s approach: transforming part of Amazon with advanced machine learning. Some of them involved rethinking current projects, like the company’s robotics efforts and its huge data-center business, Amazon Web Services (AWS). Others would create entirely new businesses, like a voice-based home appliance that would become the Echo.

The results have had an impact far beyond the individual projects. Thirumalai says that at the time of his meeting, Amazon’s AI talent was segregated into isolated pockets. “We would talk, we would have conversations, but we wouldn’t share a lot of artifacts with each other because the lessons were not easily or directly transferable,” he says. They were AI islands in a vast engineering ocean. The push to overhaul the company with machine learning changed that.

While each of those six-pagers hewed to Amazon’s religion of “single-threaded” teams—meaning that only one group “owns” the technology it uses—people started to collaborate across projects. In-house scientists took on hard problems and shared their solutions with other groups. Across the company, AI islands became connected. As Amazon's ambition for its AI projects grew, the complexity of its challenges became a magnet for top talent, especially those who wanted to see the immediate impact of their work. This compensated for Amazon's aversion to conducting pure research; the company culture demanded that innovations come solely in the context of serving its customers.

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

The Alexa Effect

The flagship product of Amazon’s push into AI is its breakaway smart speaker, the Echo, and the Alexa voice platform that powers it. These projects also sprang from a six-pager, delivered to Bezos in 2011 for an annual planning process called Operational Plan One. One person involved was an executive named Al Lindsay, an Amazonian since 2004, who had been asked to move from his post heading the Prime tech team to help with something totally new. “A low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice—you speak to it, it speaks to you,” is how he recalls the vision being described to him.

But building that system—literally an attempt to realize a piece of science fiction, the chatty computer from Star Trek—required a level of artificial intelligence prowess that the company did not have on hand. Worse, of the very few experts who could build such a system, even fewer wanted to work for Amazon. Google and Facebook were snapping up the top talent in the field. “We were the underdog,” Lindsay, who is now a VP, says.

Al Lindsay, the VP of Amazon Alexa Engine, says Amazon was the underdog when trying to recruit AI experts to design and build its voice platform.

Ian C. Bates

“Amazon had a bit of a bad image, not friendly to people who were research oriented,” says Domingos, the University of Washington professor. The company’s relentless focus on the customer, and its culture of scrappiness, did not jibe with the pace of academia or cushy perks of competitors. “At Google you’re pampered,” Domingos says. “At Amazon you set up your computer from parts in the closet.” Worse, Amazon had a reputation as a place where innovative work was kept under corporate wraps. In 2014, one of the top machine-learning specialists, Yann LeCun, gave a guest lecture to Amazon’s scientists in an internal gathering. Between the time he was invited and the event itself, LeCun accepted a job to lead Facebook’s research effort, but he came anyway. As he describes it now, he gave his talk in an auditorium of about 600 people and then was ushered into a conference room where small groups came in one by one and posed questions to him. But when he asked questions of them, they were unresponsive. This turned off LeCun, who had chosen Facebook in part because it agreed to open-source much of the work of its AI team.

Because Amazon didn’t have the talent in-house, it used its deep pockets to buy companies with expertise. “In the early days of Alexa, we bought many companies,” Limp says. In September 2011, it snapped up Yap, a speech-to-text company with expertise in translating the spoken word into written language. In January 2012, Amazon bought Evi, a Cambridge, UK, AI company whose software could respond to spoken requests like Siri does. And in January 2013, it bought Ivona, a Polish company specializing in text-to-speech, which provided technology that enabled Echo to talk.

But Amazon’s culture of secrecy hampered its efforts to attract top talent from academia. One potential recruit was Alex Smola, a superstar in the field who had worked at Yahoo and Google. “He is literally one of the godfathers of deep learning,” says Matt Wood, the general manager of deep learning and AI at Amazon Web Services. (Google Scholar lists more than 90,000 citations of Smola's work.) Amazon execs wouldn’t even reveal to him or other candidates what they would be working on. Smola rejected the offer, choosing instead to head a lab at Carnegie Mellon.

Director of Alexa Ruhi Sarikaya and VP of Amazon Alexa Engine Al Lindsay led an effort to create not only the Echo line of smart speakers, but also a voice service that could work with other company products.

Ian C. Bates

“Even until right before we launched there was a headwind,” Lindsay says. “They would say, ‘Why would I want to work at Amazon—I’m not interested in selling people products!’”

Amazon did have one thing going for it. Since the company works backward from an imagined final product (thus the fanciful press releases), the blueprints can include features that haven’t been invented yet. Such hard problems are irresistible to ambitious scientists. The voice effort in particular demanded a level of conversational AI—nailing the “wake word” (“Hey Alexa!”), hearing and interpreting commands, delivering non-absurd answers—that did not exist.

That project, even without the specifics on what Amazon was building, helped attract Rohit Prasad, a respected speech-recognition scientist at Boston-based tech contractor Raytheon BBN. (It helped that Amazon let him build a team in his hometown.) He saw Amazon’s lack of expertise as a feature, not a bug. “It was green fields here,” he says. “Google and Microsoft had been working on speech for years. At Amazon we could build from scratch and solve hard problems.” As soon as he joined in 2013, he was sent to the Alexa project. “The device existed in terms of the hardware, but it was very early in speech,” he says.

The trickiest part of the Echo—the problem that forced Amazon to break new ground and in the process lift its machine-learning game in general—was something called far field speech recognition. It involves interpreting voice commands spoken some distance from the microphones, even when they are polluted with ambient noise or other aural detritus. One challenging factor was that the device couldn’t waste any time cogitating about what you said. It had to send the audio to the cloud and produce an answer quickly enough that it felt like a conversation, and not like those awkward moments when you’re not sure if the person you’re talking to is still breathing. Building a machine-learning system that could understand and respond to conversational queries in noisy conditions required massive amounts of data—lots of examples of the kinds of interactions people would have with their Echos. It wasn’t obvious where Amazon might get such data.
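The shape of that pipeline can be sketched without any of Amazon's actual models: buffer audio, run an on-device wake-word check over a sliding window, and only then ship the captured command to a cloud recognizer, timing the round trip. Every component below is a labeled placeholder.

```python
import time
from collections import deque

CHUNK_MS = 20     # audio arrives in small chunks
WINDOW_MS = 500   # sliding window the on-device wake-word detector sees

def wake_word_detected(window: bytes) -> bool:
    """Placeholder for an on-device keyword-spotting model ("Alexa")."""
    return b"ALEXA" in window            # obviously not real detection

def cloud_recognize(audio: bytes) -> str:
    """Placeholder for the cloud speech service; a real system streams this."""
    return audio.decode(errors="ignore").strip()

def run(chunks):
    window = deque(maxlen=WINDOW_MS // CHUNK_MS)
    listening, utterance = False, b""
    for chunk in chunks:
        if not listening:
            window.append(chunk)
            listening = wake_word_detected(b"".join(window))  # gate locally first
        else:
            utterance += chunk                                # capture the command
    start = time.perf_counter()
    text = cloud_recognize(utterance)    # the round trip that must feel conversational
    print(text, f"({(time.perf_counter() - start) * 1000:.1f} ms)")

run([b"noise ", b"ALEXA", b" play", b" some", b" music"])
```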

Various Amazon devices and third-party products now use the Alexa voice service. Data collected through Alexa helps improve the system and supercharges Amazon’s broader AI efforts.

Ian C. Bates

Far-field technology had been done before, says Limp, the VP of devices and services. But “it was on the nose cone of Trident submarines, and it cost a billion dollars.” Amazon was trying to implement it in a device that would sit on a kitchen counter, and it had to be cheap enough for consumers to spring for a weird new gadget. “Nine out of 10 people on my team thought it couldn’t be done,” Prasad says. “We had a technology advisory committee of luminaries outside Amazon—we didn’t tell them what we were working on, but they said, ‘Whatever you do, don’t work on far field recognition!’”

Prasad’s experience gave him confidence that it could be done. But Amazon did not have an industrial-strength system in place for applying machine learning to product development. “We had a few scientists looking at deep learning, but we didn’t have the infrastructure that could make it production-ready,” he says. The good news was that all the pieces were there at Amazon—an unparalleled cloud service, data centers loaded with GPUs to crunch machine-learning algorithms, and engineers who knew how to move data around like fireballs.

His team used those parts to create a platform that was itself a valuable asset, beyond its use in fulfilling the Echo’s mission. “Once we developed Echo as a far-field speech recognition device, we saw the opportunity to do something bigger—we could expand the scope of Alexa to a voice service,” says Alexa senior principal scientist Spyros Matsoukas, who had worked with Prasad at Raytheon BBN. (His work there had included a little-known Darpa project called Hub4, which used broadcast news shows and intercepted phone conversations to advance voice recognition and natural language understanding—great training for the Alexa project.) One immediate way they extended Alexa was to allow third-party developers to create their own voice-technology mini-applications—dubbed “skills”—to run on the Echo itself. But that was only the beginning.
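A third-party "skill" is, at its simplest, a small web handler (often an AWS Lambda function) that maps an Alexa intent to a spoken reply, using the Alexa Skills Kit request/response JSON format. The intent name and reply text below are invented for illustration.

```python
# A minimal Alexa Skills Kit handler (the intent name and reply are invented).
def lambda_handler(event, context):
    req = event["request"]
    if req["type"] == "LaunchRequest":
        text = "Welcome. What would you like to know?"
    elif req["type"] == "IntentRequest" and req["intent"]["name"] == "PrepTimeIntent":
        text = "Pad thai usually takes about thirteen minutes."
    else:
        text = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Local test with a fake IntentRequest payload:
print(lambda_handler(
    {"request": {"type": "IntentRequest", "intent": {"name": "PrepTimeIntent"}}},
    None,
)["response"]["outputSpeech"]["text"])
```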

Spyros Matsoukas, a senior principal scientist at Amazon, helped turn Alexa into a force for strengthening Amazon’s company-wide culture around AI.

Adam Glanzman

As Alexa broke out beyond the Echo, the company’s AI culture started to coalesce. Teams across the company began to realize that Alexa could be a useful voice service for their pet projects too. “So all that data and technology comes together, even though we are very big on single-threaded ownership,” Prasad says. First other Amazon products began integrating into Alexa: When you speak into your Alexa device you can access Amazon Music, Prime Video, your personal recommendations from the main shopping website, and other services. Then the technology began hopscotching through other Amazon domains. “Once we had the foundational speech capacity, we were able to bring it to non-Alexa products like Fire TV, voice shopping, the Dash Wand for Amazon Fresh, and, ultimately, AWS,” Lindsay says.

The AI islands within Amazon were drawing closer.

Another pivotal piece of the company’s transformation clicked into place once millions of customers (Amazon won’t say exactly how many) began using the Echo and the family of other Alexa-powered devices. Amazon started amassing a wealth of data—quite possibly the biggest collection of interactions of any conversation-driven device ever. That data became a powerful lure for potential hires. Suddenly, Amazon rocketed up the list of places where those coveted machine-learning experts might want to work. “One of the things that made Alexa so attractive to me is that once you have a device in the market, you have the resource of feedback. Not only the customer feedback, but the actual data that is so fundamental to improving everything—especially the underlying platform,” says Ravi Jain, an Alexa VP of machine learning who joined the company last year.

So as more people used Alexa, Amazon got information that not only made that system perform better but supercharged its own machine-learning tools and platforms—and made the company a hotter destination for machine-learning scientists.

The flywheel was starting to spin.

A Brainier Cloud

Amazon began selling Echo to Prime customers in 2014. That was also the year that Swami Sivasubramanian became fascinated with machine learning. Sivasubramanian, who was managing the AWS database and analytics business at the time, was on a family trip to India when, due to a combination of jet lag and a cranky infant daughter, he found himself at his computer late into the night fiddling with tools like Google’s TensorFlow and Caffe, the machine-learning framework favored by Facebook and many in the academic community. He concluded that combining these tools with Amazon’s cloud service could yield tremendous value. By making it easy to run machine-learning algorithms in the cloud, he thought, the company might tap into a vein of latent demand. “We cater to millions of developers every month,” he says. “The majority are not professors at MIT but developers who have no background in machine learning.”

Swami Sivasubramanian, VP of AI at AWS, was among the first to realize the business implications of integrating AI tools into the company’s cloud services.

Ian C. Bates

At his next Jeff Bezos review he came armed with an epic six-pager. On one level, it was a blueprint for adding machine-learning services to AWS. But Sivasubramanian saw it as something broader: a grand vision of how AWS could become the throbbing center of machine-learning activity throughout all of techdom.

In a sense, offering machine learning to the tens of thousands of Amazon cloud customers was inevitable. “When we first put together the original business plan for AWS, the mission was to take technology that was only in reach of a small number of well-funded organizations and make it as broadly distributed as possible,” says Wood, the AWS machine-learning manager. “We’ve done that successfully with computing, storage, analytics, and databases—and we’re taking the exact same approach with machine learning.” What made it easier was that the AWS team could draw on the experience that the rest of the company was accumulating.

AWS’s Amazon Machine Learning, first offered in 2015, allows customers like C-Span to set up a private catalog of faces, Wood says. Zillow uses it to estimate house prices. Pinterest employs it for visual search. And several autonomous driving startups are using AWS machine learning to improve products via millions of miles of simulated road testing.

In 2016, AWS released new machine-learning services that more directly drew on the innovations from Alexa—a text-to-speech component called Polly and a natural language processing engine called Lex. These offerings allowed AWS customers, which span from giants like Pinterest and Netflix to tiny startups, to build their own mini Alexas. A third service involving vision, Rekognition, drew on work that had been done in Prime Photos, a relatively obscure group at Amazon that was trying to perform the same deep-learning wizardry found in photo products by Google, Facebook, and Apple.
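Calling services like these takes only a few lines with boto3, AWS's Python SDK; the filenames and text below are placeholders, and real calls require AWS credentials.

```python
import boto3

polly = boto3.client("polly")
rekognition = boto3.client("rekognition")

# Text-to-speech with Polly: returns an MP3 audio stream.
speech = polly.synthesize_speech(
    Text="Your order will arrive in about twenty minutes.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("eta.mp3", "wb") as out:
    out.write(speech["AudioStream"].read())

# Image labeling with Rekognition: detected objects with confidence scores.
with open("photo.jpg", "rb") as image:
    result = rekognition.detect_labels(Image={"Bytes": image.read()}, MaxLabels=5)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```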

These machine-learning services are both a powerful revenue generator and key to Amazon’s AI flywheel, as customers as disparate as NASA and the NFL are paying to get their machine learning from Amazon. As companies build their vital machine-learning tools inside AWS, the likelihood that they will move to competing cloud operations becomes ridiculously remote. (Sorry, Google, Microsoft, or IBM.) Consider Infor, a multibillion-dollar company that creates business applications for corporate customers. It recently released an extensive new application called Coleman (named after the NASA mathematician in Hidden Figures) that allows its customers to automate various processes, analyze performance, and interact with data all through a conversational interface. Instead of building its own bot from scratch, it uses AWS’s Lex technology. “Amazon is doing it anyway, so why would we spend time on that? We know our customers and we can make it applicable to them,” says Massimo Capoccia, a senior VP of Infor.

AWS’s dominant role in the ether also gives it a strategic advantage over competitors, notably Google, which had hoped to use its machine-learning leadership to catch up with AWS in cloud computing. Yes, Google may offer customers super-fast, machine-learning-optimized chips on its servers. But companies on AWS can more easily interact with—and sell to—firms that are also on the service. “It’s like Willie Sutton saying he robs banks because that’s where the money is,” says DigitalGlobe CTO Walter Scott about why his firm uses Amazon’s technology. “We use AWS for machine learning because that’s where our customers are.”

Last November at the AWS re:Invent conference, Amazon unveiled a more comprehensive machine-learning prosthetic for its customers: SageMaker, a sophisticated but super easy-to-use platform. One of its creators is none other than Alex Smola, the machine-learning superstar with 90,000 academic citations who spurned Amazon five years ago. When Smola decided to return to industry, he wanted to help create powerful tools that would make machine learning accessible to everyday software developers. So he went to the place where he felt he’d make the biggest impact. “Amazon was just too good to pass up,” he says. “You can write a paper about something, but if you don’t build it, nobody will use your beautiful algorithm.”

When Smola told Sivasubramanian that building tools to spread machine learning to millions of people was more important than publishing one more paper, he got a nice surprise. “You can publish your paper, too!” Sivasubramanian said. Yes, Amazon is now more liberal in permitting its scientists to publish. “It’s helped quite a bit with recruiting top talent as well as providing visibility of what type of research is happening at Amazon,” says Spyros Matsoukas, who helped set guidelines for a more open stance.

It’s too early to know if the bulk of AWS’s million-plus customers will begin using SageMaker to build machine learning into their products. But every customer that does will find itself heavily invested in Amazon as its machine-learning provider. In addition, the platform is sufficiently sophisticated that even AI groups within Amazon, including the Alexa team, say they intend to become SageMaker customers, using the same toolset offered to outsiders. They believe it will save them a lot of work by setting a foundation for their projects, freeing them to concentrate on the fancier algorithmic tasks.
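To give a concrete sense of what that shared toolset looks like from the outside, here is a minimal, hypothetical sketch of the SageMaker workflow using the SageMaker Python SDK: train one of the service's built-in algorithms against data in S3, then deploy the result behind a managed endpoint. The IAM role, bucket paths, algorithm version, and instance types are placeholder assumptions, not details from the article.

```python
# Hypothetical SageMaker sketch: train a built-in algorithm, then deploy it.
# The role ARN, S3 paths, version, and instance types are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # placeholder

# Resolve the container image for SageMaker's built-in XGBoost algorithm.
image = image_uris.retrieve("xgboost", region=session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# SageMaker provisions the training infrastructure, runs the job against the
# S3 data (placeholder path), and writes the model artifact back to S3.
estimator.fit({"train": "s3://example-bucket/training-data/"})

# One call turns the trained model into a managed HTTPS inference endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```

Everything operational (provisioning the training machines, running the job, hosting the endpoint) is handled by the platform, which is the kind of foundation internal teams describe wanting to reuse rather than rebuild.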

Even if only some of AWS’s customers use SageMaker, Amazon will find itself with an abundance of data about how its systems perform (excluding, of course, confidential information that customers keep to themselves). Which will lead to better algorithms. And better platforms. And more customers. The flywheel is working overtime.

AI Everywhere

With its machine learning overhaul in place, the company’s AI expertise is now distributed across its many teams—much to the satisfaction of Bezos and his consiglieri. While there is no central office of AI at Amazon, there is a unit dedicated to the spread and support of machine learning, as well as some applied research to push new science into the company’s projects. The Core Machine Learning Group is led by Ralf Herbrich, who worked on the Bing team at Microsoft and then served a year at Facebook, before Amazon lured him in 2012. “It’s important that there’s a place that owns this community” within the company, he says. (Naturally, the mission of the team was outlined in an aspirational six-pager approved by Bezos.)

His duties include nurturing Amazon’s fast-growing machine-learning culture. Because of the company’s customer-centric approach—solving problems rather than doing blue-sky research—Amazon execs do concede that their recruiting efforts will always tilt towards those interested in building things rather than those chasing scientific breakthroughs. Facebook’s LeCun puts it another way: “You can do quite well by not leading the intellectual vanguard.”

But Amazon is following Facebook and Google’s lead in training its workforce to become adept at AI. It runs internal courses on machine-learning tactics. It hosts a series of talks from its in-house experts. And since 2013, the company has hosted an internal machine-learning conference at its headquarters every spring, a kind of Amazon-only version of NIPS, the premier academic machine-learning-palooza. “When I started, the Amazon machine-learning conference was just a couple hundred people; now it’s in the thousands,” Herbrich says. “We don’t have the capacity in the largest meeting room in Seattle, so we hold it there and stream it to six other meeting rooms on the campus.” One Amazon exec remarks that if it gets any bigger, instead of calling it an Amazon machine-learning event, it should just be called Amazon.

Herbrich’s group continues to push machine learning into everything the company attempts. For example, the fulfillment teams wanted to better predict which of the eight possible box sizes they should use for a customer order, so they turned to Herbrich’s team for help. “That group doesn’t need its own science team, but it needed these algorithms and needed to be able to use them easily,” he says. In another example, David Limp points to a transformation in how Amazon predicts how many customers might buy a new product. “I’ve been in consumer electronics for 30 years now, and for 25 of those forecasting was done with [human] judgment, a spreadsheet, and some Velcro balls and darts,” he says. “Our error rates are significantly down since we’ve started using machine learning in our forecasts.”
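Framed as code, the box-size decision is a routine multi-class classification problem. The sketch below is purely illustrative (the features, training rows, and model choice are invented, not Amazon's pipeline), but it shows the shape of the task Herbrich's team can hand other groups as a ready-made tool:

```python
# Toy illustration of box-size selection as a classification problem.
# The features, data, and model choice are invented for this sketch;
# they are not Amazon's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: total item volume (cm^3), longest item dimension (cm),
# item count, total weight (g). Label: one of eight box sizes (0-7).
X_train = np.array([
    [1200,  20, 1,  300],
    [5400,  35, 3, 1500],
    [800,   15, 1,  200],
    [15000, 60, 5, 4800],
    [2500,  28, 2,  700],
    [30000, 80, 9, 9000],
])
y_train = np.array([1, 3, 0, 5, 2, 7])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict a box size for a new order.
new_order = np.array([[4000, 30, 2, 1100]])
print("suggested box size:", model.predict(new_order)[0])
```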

Still, sometimes Herbrich’s team will apply cutting-edge science to a problem. Amazon Fresh, the company’s grocery delivery service, has been operating for a decade, but it needed a better way to assess the quality of fruits and vegetables—humans were too slow and inconsistent. His Berlin-based team built sensor-laden hardware and new algorithms that compensated for the inability of the system to touch and smell the food. “After three years, we have a prototype phase, where we can judge the quality more reliably” than before, he says.

Of course, such advances can then percolate throughout the Amazon ecosystem. Take Amazon Go, the deep-learning-powered cashier-less grocery store in its headquarters building that recently opened to the public. “As a customer of AWS, we benefit from its scale,” says Dilip Kumar, VP of Technology for Amazon Go. “But AWS is also a beneficiary.” He cites as an example Amazon Go’s unique system of streaming data from hundreds of cameras to track the shopping activities of customers. The innovations his team concocted helped influence an AWS service called Kinesis, which allows customers to stream video from multiple devices to the Amazon cloud, where they can process it, analyze it, and use it to further advance their machine learning efforts.
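A hedged sketch of the kind of Kinesis Video Streams setup that paragraph describes, written with boto3, might look like the following; the stream name, region, and retention period are placeholder choices, not values from Amazon Go.

```python
# Minimal, illustrative setup for Amazon Kinesis Video Streams via boto3.
# Stream name, region, and retention are placeholder choices for this sketch.
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-east-1")

# Register a named video stream with 24 hours of retention.
kvs.create_stream(StreamName="store-camera-01", DataRetentionInHours=24)

# A producer (e.g. a camera gateway) asks for the endpoint it should push
# video fragments to via the PUT_MEDIA API; a consumer would request
# GET_MEDIA instead and read the stream back for analysis.
endpoint = kvs.get_data_endpoint(
    StreamName="store-camera-01",
    APIName="PUT_MEDIA",
)["DataEndpoint"]
print("ingest endpoint:", endpoint)
```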

Even when an Amazon service doesn’t yet use the company’s machine-learning platform, it can be an active participant in the process. Amazon’s Prime Air drone-delivery service, still in the prototype phase, has to build its AI separately because its autonomous drones can’t count on cloud connectivity. But it still benefits hugely from the flywheel, both in drawing on knowledge from the rest of the company and figuring out what tools to use. “We think about this as a menu—everybody is sharing what dishes they have,” says Gur Kimchi, VP of Prime Air. He anticipates that his team will eventually have tasty menu offerings of its own. “The lessons we’re learning and problems we’re solving in Prime Air are definitely of interest to other parts of Amazon,” he says.

In fact, it already seems to be happening. “If somebody’s looking at an image in one part of the company, like Prime Air or Amazon Go, and they learn something and create an algorithm, they talk about it with other people in the company,” says Beth Marcus, senior principal technologist at Amazon Robotics. “And so someone in my team could use it to, say, figure out what’s in an image of a product moving through the fulfillment center.”

Beth Marcus, senior principal technologist at Amazon Robotics, has seen the benefits of collaborating with the company’s growing pool of AI experts. (Photograph: Adam Glanzman)

Is it possible for a company with a product-centered approach to eclipse the efforts of competitors staffed with the superstars of deep learning? Amazon’s making a case for it. “Despite the fact they’re playing catchup, their product releases have been incredibly impressive,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. “They’re a world-class company and they’ve created world-class AI products.”

The flywheel keeps spinning, and we haven’t seen the impact of a lot of six-pager proposals still in the pipeline. More data. More customers. Better platforms. More talent.

Alexa, how is Amazon doing in AI?

The answer? Jeff Bezos’s braying laugh.

Read more: https://www.wired.com/story/amazon-artificial-intelligence-flywheel/


The light and dark of AI-powered smartphones

Analyst Gartner put out a 10-strong listicle this week identifying what it dubbed “high-impact” uses for AI-powered features on smartphones that it suggests will enable device vendors to provide “more value” to customers via the medium of “more advanced” user experiences.

It’s also predicting that, by 2022, a full 80 per cent of smartphones shipped will have on-device AI capabilities, up from just 10 per cent in 2017.

More on-device AI could result in better data protection and improved battery performance, in its view — as a consequence of data being processed and stored locally. At least that’s the top-line takeout.

Its full list of apparently enticing AI uses is presented (verbatim) below.

But in the interests of presenting a more balanced narrative around automation-powered UXes, we’ve included some alternative thoughts after each listed item, considering the nature of the value exchange required for smartphone users to tap into these touted ‘AI smarts’ — and thus some potential drawbacks too.

Uses and abuses of on-device AI

1)   “Digital Me” Sitting on the Device

“Smartphones will be an extension of the user, capable of recognising them and predicting their next move. They will understand who you are, what you want, when you want it, how you want it done and execute tasks upon your authority.”

“Your smartphone will track you throughout the day to learn, plan and solve problems for you,” said Angie Wang, principal research analyst at Gartner. “It will leverage its sensors, cameras and data to accomplish these tasks automatically. For example, in the connected home, it could order a vacuum bot to clean when the house is empty, or turn a rice cooker on 20 minutes before you arrive.”

Hello stalking-as-a-service. Is this ‘digital me’ also going to whisper sweetly that it’s my ‘number one fan’ as it pervasively surveils my every move in order to fashion a digital body-double that ensnares my free will within its algorithmic black box… 


Or is it just going to be really annoyingly bad at trying to predict exactly what I want at any given moment, because, y’know, I’m a human not a digital paperclip (no, I am not writing a fucking letter).  

Oh and who’s to blame when the AI’s choices not only aren’t to my liking but are much worse? Say the AI sent the robo vacuum cleaner over the kids’ ant farm when they were away at school… is the AI also going to explain to them the reason for their pets’ demise? Or what if it turns on my empty rice cooker (after I forgot to top it up) — at best pointlessly expending energy, at worst enthusiastically burning down the house.

We’ve been told that AI assistants are going to get really good at knowing and helping us real soon for a long time now. But unless you want to do something simple like play some music, or something narrow like find a new piece of similar music to listen to, or something basic like order a staple item from the Internet, they’re still far more idiot than savant. 

2)   User Authentication

“Password-based, simple authentication is becoming too complex and less effective, resulting in weak security, poor user experience, and a high cost of ownership. Security technology combined with machine learning, biometrics and user behaviour will improve usability and self-service capabilities. For example, smartphones can capture and learn a user’s behaviour, such as patterns when they walk, swipe, apply pressure to the phone, scroll and type, without the need for passwords or active authentications.”

More stalking-as-a-service. No security without total privacy surrender, eh? But will I get locked out of my own devices if I’m panicking and not behaving like I ‘normally’ do — say, for example, because the AI turned on the rice cooker when I was away and I arrived home to find the kitchen in flames? And will I be unable to prevent my device from being unlocked on account of it happening to be held in my hands — even though I might actually want it to remain locked at any given moment, because devices are personal and situations aren’t always predictable?

And what if I want to share access to my mobile device with my family? Will they also have to strip naked in front of its all-seeing digital eye just to be granted access? Or will this AI-enhanced multi-layered biometric system end up making it harder to share devices between loved ones? As has indeed been the case with Apple’s shift from a fingerprint biometric (which allows multiple fingerprints to be registered) to a facial biometric authentication system, on the iPhone X (which doesn’t support multiple faces being registered)? Are we just supposed to chalk up the gradual goodnighting of device communality as another notch in ‘the price of progress’?
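For what it's worth, the machine-learning mechanics behind behavioural authentication are not exotic. The toy sketch below is an assumption-laden illustration rather than any vendor's implementation: fit an anomaly detector on sensor-derived features from a user's past sessions, then flag sessions that don't look like the owner and fall back to a PIN prompt.

```python
# Toy behavioural-authentication sketch: an anomaly detector trained on a
# user's past sensor-derived features flags sessions that don't look like
# them. Feature choices and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: mean swipe speed (px/s), mean touch pressure, typing interval (ms),
# step cadence (steps/min), collected during normal, authenticated use.
owner_sessions = np.random.default_rng(0).normal(
    loc=[900, 0.45, 180, 110], scale=[80, 0.05, 20, 6], size=(200, 4)
)

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(owner_sessions)

# Score new sessions: +1 means "consistent with the owner", -1 means anomalous,
# which might trigger a fallback prompt for a PIN or password.
new_session = np.array([[870, 0.47, 175, 108]])
suspicious_session = np.array([[300, 0.20, 90, 70]])
print(detector.predict(new_session))         # likely [ 1]
print(detector.predict(suspicious_session))  # likely [-1]
```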

3)   Emotion Recognition

“Emotion sensing systems and affective computing allow smartphones to detect, analyse, process and respond to people’s emotional states and moods. The proliferation of virtual personal assistants and other AI-based technology for conversational systems is driving the need to add emotional intelligence for better context and an enhanced service experience. Car manufacturers, for example, can use a smartphone’s front camera to understand a driver’s physical condition or gauge fatigue levels to increase safety.”

No honest discussion of emotion sensing systems is possible without also considering what advertisers could do if they gained access to such hyper-sensitive mood data. On that topic Facebook gives us a clear steer on the potential risks — last year leaked internal documents suggested the social media giant was touting its ability to crunch usage data to identify feelings of teenage insecurity as a selling point in its ad sales pitches. So while sensing emotional context might suggest some practical utility that smartphone users may welcome and enjoy, it’s also potentially highly exploitable and could easily feel horribly invasive — opening the door to, say, a teenager’s smartphone knowing exactly when to hit them with an ad because they’re feeling low.

If on-device AI did mean locally processed emotion-sensing systems could guarantee that mood data never leaks, there might be less cause for concern. But normalizing emotion-tracking by baking it into the smartphone UI would surely drive a wider push for similarly “enhanced” services elsewhere — and then it would be down to the individual app developer (and their attitude to privacy and security) to determine how your moods get used.

As for cars, aren’t we also being told that AI is going to do away with the need for human drivers? Why should we need AI watchdogs surveilling our emotional state inside vehicles (which will really just be nap and entertainment pods at that point, much like airplanes)? A major consumer-focused safety argument for emotion sensing systems seems unconvincing. Whereas government agencies and businesses would surely love to get dynamic access to our mood data for all sorts of reasons…

4)   Natural-Language Understanding

“Continuous training and deep learning on smartphones will improve the accuracy of speech recognition, while better understanding the user’s specific intentions. For instance, when a user says “the weather is cold,” depending on the context, his or her real intention could be “please order a jacket online” or “please turn up the heat.” As an example, natural-language understanding could be used as a near real-time voice translator on smartphones when traveling abroad.”

While we can all surely still dream of having our own personal babelfish — even given the cautionary warning against human hubris embedded in the biblical allegory to which the concept alludes — it would be a very impressive AI assistant that could automagically select the perfect jacket to buy its owner after they had casually opined that “the weather is cold”.

I mean, no one would mind a gift surprise coat. But, clearly, the AI being inextricably deeplinked to your credit card means it would be you forking out for, and having to wear, that bright red Columbia Lay D Down Jacket that arrived (via Amazon Prime) within hours of your climatic observation, and which the AI had algorithmically determined would be robust enough to ward off some “cold”, while having also data-mined your prior outerwear purchases to whittle down its style choice. Oh, you still don’t like how it looks? Too bad.  

The marketing ‘dream’ pushed at consumers of the perfect AI-powered personal assistant involves an awful lot of suspension of disbelief around how much actual utility the technology is credibly going to provide — i.e. unless you’re the kind of person who wants to reorder the same brand of jacket every year and also finds it horribly inconvenient to manually seek out a new coat online and click the ‘buy’ button yourself. Or else who feels there’s a life-enhancing difference between having to directly ask an Internet connected robot assistant to “please turn up the heat” vs having a robot assistant 24/7 spying on you so it can autonomously apply calculated agency to choose to turn up the heat when it overheard you talking about the cold weather — even though you were actually just talking about the weather, not secretly asking the house to be magically willed warmer. Maybe you’re going to have to start being a bit more careful about the things you say out loud when your AI is nearby (i.e. everywhere, all the time). 

Humans have enough trouble understanding each other; expecting our machines to be better at this than we are ourselves seems fanciful — at least unless you take the view that the makers of these data-constrained, imperfect systems are hoping to patch AI’s limitations and comprehension deficiencies by socially re-engineering their devices’ erratic biological users by restructuring and reducing our behavioral choices to make our lives more predictable (and thus easier to systemize). Call it an AI-enhanced life more ordinary, less lived.

5)   Augmented Reality (AR) and AI Vision

“With the release of iOS 11, Apple included an ARKit feature that provides new tools to developers to make adding AR to apps easier. Similarly, Google announced its ARCore AR developer tool for Android and plans to enable AR on about 100 million Android devices by the end of next year. Google expects almost every new Android phone will be AR-ready out of the box next year. One example of how AR can be used is in apps that help to collect user data and detect illnesses such as skin cancer or pancreatic cancer.”

While most AR apps are inevitably going to be a lot more frivolous than the cancer detecting examples being cited here, no one’s going to neg the ‘might ward off a serious disease’ card. That said, a system that’s harvesting personal data for medical diagnostic purposes amplifies questions about how sensitive health data will be securely stored, managed and safeguarded by smartphone vendors. Apple has been pro-active on the health data front — but, unlike Google, its business model is not dependent on profiling users to sell targeted advertising so there are competing types of commercial interests at play.

And indeed, regardless of on-device AI, it seems inevitable that users’ health data is going to be taken off local devices for processing by third party diagnostic apps (which will want the data to help improve their own AI models) — so data protection considerations ramp up accordingly. Meanwhile powerful AI apps that could suddenly diagnose very serious illnesses also raise wider issues around how an app could responsibly and sensitively inform a person it believes they have a major health problem. ‘Do no harm’ starts to look a whole lot more complex when the consultant is a robot.  

6) Device Management

“Machine learning will improve device performance and standby time. For example, with many sensors, smartphones can better understand and learn user’s behaviour, such as when to use which app. The smartphone will be able to keep frequently used apps running in the background for quick re-launch, or to shut down unused apps to save memory and battery.”

Another AI promise that’s predicated on pervasive surveillance coupled with reduced user agency — what if I actually want to keep an app open that I normally close directly or vice versa; the AI’s template won’t always predict dynamic usage perfectly. Criticism directed at Apple after the recent revelation that iOS will slow performance of older iPhones as a technique for trying to eke better performance out of older batteries should be a warning flag that consumers can react in unexpected ways to a perceived loss of control over their devices by the manufacturing entity.   
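Under the hood, "keeping frequently used apps running" is mundane prediction rather than magic. A toy sketch (the launch log, app names, and cutoff are invented for illustration) shows how little it can take: count historical launches per hour of day and keep the most frequent apps for the current hour resident.

```python
# Toy sketch of "keep frequently used apps warm": count historical app
# launches per hour of day and keep the top-N predicted apps resident.
# The launch log and N are invented for illustration.
from collections import Counter, defaultdict

launch_log = [  # (hour_of_day, app) events from past usage
    (8, "mail"), (8, "news"), (8, "mail"), (9, "calendar"),
    (12, "maps"), (13, "chat"), (18, "music"), (18, "chat"),
    (18, "music"), (22, "video"), (22, "video"), (8, "news"),
]

by_hour = defaultdict(Counter)
for hour, app in launch_log:
    by_hour[hour][app] += 1

def apps_to_keep_warm(hour, n=2):
    """Return the n apps most often launched at this hour historically."""
    return [app for app, _ in by_hour[hour].most_common(n)]

print(apps_to_keep_warm(8))   # ['mail', 'news']
print(apps_to_keep_warm(18))  # ['music', 'chat']
```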

7) Personal Profiling

“Smartphones are able to collect data for behavioural and personal profiling. Users can receive protection and assistance dynamically, depending on the activity that is being carried out and the environments they are in (e.g., home, vehicle, office, or leisure activities). Service providers such as insurance companies can now focus on users, rather than the assets. For example, they will be able to adjust the car insurance rate based on driving behaviour.”

Insurance premiums based on pervasive behavioral analysis — in this case powered by smartphone sensor data (location, speed, locomotion etc) — could also of course be adjusted in ways that end up penalizing the device owner. Say if a person’s phone indicated they brake harshly quite often. Or regularly exceed the speed limit in certain zones. And again, isn’t AI supposed to be replacing drivers behind the wheel? Will a self-driving car require its rider to have driving insurance? Or aren’t traditional car insurance premiums on the road to zero anyway — so where exactly is the consumer benefit from being pervasively personally profiled? 

Meanwhile discriminatory pricing is another clear risk with profiling. And for what other purposes might a smartphone be utilized to perform behavioral analysis of its owner? Time spent hitting the keys of an office computer? Hours spent lounged out in front of the TV? Quantification of almost every quotidian thing might become possible as a consequence of always-on AI — and given the ubiquity of the smartphone (aka the ‘non-wearable wearable’) — but is that actually desirable? Could it not induce feelings of discomfort, stress and demotivation by making ‘users’ (i.e. people) feel they are being microscopically and continuously judged just for how they live? 

The risks around pervasive profiling appear even more crazily dystopian when you look at China’s plan to give every citizen a ‘character score’ — and consider the sorts of intended (and unintended) consequences that could flow from state level control infrastructures powered by the sensor-packed devices in our pockets. 

8)   Content Censorship/Detection

“Restricted content can be automatically detected. Objectionable images, videos or text can be flagged and various notification alarms can be enabled. Computer recognition software can detect any content that violates any laws or policies. For example, taking photos in high security facilities or storing highly classified data on company-paid smartphones will notify IT.”

Personal smartphones that snitch on their users for breaking corporate IT policies sound like something straight out of a sci-fi dystopia. Ditto AI-powered content censorship. There’s a rich and varied (and ever-expanding) tapestry of examples of AI failing to correctly identify, or entirely misclassifying, images — including being fooled by deliberately adulterated graphics — as well as a long history of tech companies misapplying their own policies to disappear from view (or otherwise) certain pieces and categories of content (including really iconic and really natural stuff) — so freely handing control over what we can and cannot see (or do) with our own devices at the UI level to a machine agency that’s ultimately controlled by a commercial entity subject to its own agendas and political pressures would seem ill-advised to say the least. It would also represent a seismic shift in the power dynamic between users and connected devices.

9) Personal Photographing

“Personal photographing includes smartphones that are able to automatically produce beautified photos based on a user’s individual aesthetic preferences. For example, there are different aesthetic preferences between the East and West — most Chinese people prefer a pale complexion, whereas consumers in the West tend to prefer tan skin tones.”

AI already has a patchy history when it comes to racially offensive ‘beautification’ filters. So any kind of automatic adjustment of skin tones seems equally ill-advised.  Zooming out, this kind of subjective automation is also hideously reductive — fixing users more firmly inside AI-generated filter bubbles by eroding their agency to discover alternative perspectives and aesthetics. What happens to ‘beauty is in the eye of the beholder’ if human eyes are being unwittingly rendered algorithmically color-blind? 

10)    Audio Analytic

“The smartphone’s microphone is able to continuously listen to real-world sounds. AI capability on device is able to tell those sounds, and instruct users or trigger events. For example, a smartphone hears a user snoring, then triggers the user’s wristband to encourage a change in sleeping positions.”

What else might a smartphone microphone that’s continuously listening to the sounds in your bedroom, bathroom, living room, kitchen, car, workplace, garage, hotel room and so on be able to discern and infer about you and your life? And do you really want an external commercial agency determining how best to systemize your existence to such an intimate degree that it has the power to disrupt your sleep? The discrepancy between the ‘problem’ being suggested here (snoring) and the intrusive ‘fix’ (wiretapping coupled with a shock-generating wearable) very firmly underlines the lack of ‘automagic’ involved in AI. On the contrary, the artificial intelligence systems we are currently capable of building require near totalitarian levels of data and/or access to data and yet consumer propositions are only really offering narrow, trivial or incidental utility.

This discrepancy does not trouble the big data-mining businesses that have made it their mission to amass massive data-sets so they can fuel business-critical AI efforts behind the scenes. But for smartphone users asked to sleep beside a personal device that’s actively eavesdropping on bedroom activity, for example, the equation starts to look rather more unbalanced. And even if YOU personally don’t mind, what about everyone else around you whose “real-world sounds” will also be snooped on by your phone, regardless of whether they like it or not? Have you asked them if they want an AI quantifying the noises they make? Are you going to inform everyone you meet that you’re packing a wiretap?

Read more: https://techcrunch.com/2018/01/06/the-light-and-dark-of-ai-powered-smartphones/
