Cloud kitchens is an oxymoron

The biggest wave in consumer products right now has all the hallmarks of another bubble: misplaced investor expectations and, sadly, lower margins.

Cloud kitchens (the category, and not just CloudKitchens the startup service) is essentially WeWork for restaurant kitchens. Instead of buying an expensive restaurant site on a heavily walked street, a cloud kitchen is developed in a cheaper locale (an industrial district, perhaps), with dozens of kitchen stations that are individually rentable for short periods of time by chefs and restaurant proprietors.

It’s a market that has exploded this year. CloudKitchens, which is funded by Uber co-founder and former CEO Travis Kalanick, is perhaps the best-known example, but others are competing, and none more so than meal delivery companies. DoorDash announced that it was opening a shared kitchen in Redwood City just this week, Amazon is getting in the game and, around the world, companies like India-based transportation network Ola are building out their own shared kitchens.

That has led to laudatory headlines galore. Mike Isaac and David Yaffe-Bellany talk about “the rise of the virtual restaurant” at The New York Times, while Douglas Bell, contributing to Forbes, wrote that “Deliveroo’s Virtual Restaurant Model Will Eat The Food Service Industry.”

And there are not just headlines, but predictions of doom as well for millions of small-business restaurant owners. Michael Moritz, the famed partner at Sequoia, wrote in the Financial Times earlier this year:

The large chain restaurants that operate pick-up locations will be insulated from many of these services, as will the high-end restaurants that offer memorable experiences. But the local trattoria, taqueria, curry shop and sushi bar will be pressed to stay in business.

Latent in these pieces (there are dozens of them published on the web) lies a superficial storyline that’s appealing to the bright but not detail-oriented: that there are high software margins (or “cloud” margins, if you will) to come from a world in which kitchen space is suddenly shareable, and that’s going to lead to a complete disruption of restaurants as we know them.

It’s the same sort of storyline that propelled WeWork to meteoric heights before it came crashing back down to reality these past few weeks. As Jesse Hempel wrote in Wired a few years ago about the shareable office startup: “Over time, this could be a much bigger opportunity than coworking spaces, one in which everything WeWork has built so far will simply feed an algorithm that will design a perfectly efficient approach to office space.”

Clearly, the AI algorithm for office efficiency (“WeWork Brain”?) wasn’t as profitable as hoped, with WeWork expected to lay off 500 software engineers in the coming weeks.

And yet despite the seeming collapse of WeWork and the destruction of its narrative, we still haven’t learned our lesson. As Isaac and Yaffe-Bellany discuss in their NYT piece, “No longer must restaurateurs rent space for a dining room. All they need is a kitchen — or even just part of one.” Now I know what the two mean here, but let’s be uncharitable for a moment: you can’t rent a part of a kitchen. No one rents the stovetop and not the prep area.

But it is that sort of slippery logic that can cause an entire industry to rise and eventually crumble. Just as with the whole “WeWork should really be valued as a software company” meme, the term “cloud kitchens” implies the flexibility (and, I guess, margins?) of data centers, when in reality they couldn’t be further from them in practice. Commercial kitchens require regulatory licenses and inspections, constant monitoring and maintenance, not to mention massive kitchen staffs (they aren’t automated kitchens!).

So let’s look at how margins and leverage play out for the different players. If you are the owner of one of these cloud kitchens, how exactly do you get any pricing leverage in the marketplace? Isaac and Yaffe-Bellany again write, “Diners who order from the apps may have no idea that the restaurant doesn’t physically exist.”

That sounds plausible, but if consumers don’t know where these restaurants physically are, what is stopping an owner from switching their kitchen to another “cloud”? In fact, why not just switch regularly and force a constant bidding war between different clouds? Unlike actual cloud infrastructure, where switching costs are often extremely prohibitive, the switching costs in kitchens seem rather minimal, perhaps as simple as packing up a box or two of ingredients and walking down the street.

That’s why we are seeing almost no innovation coming from early-stage startups in this space. Deliveroo, Uber Eats, DoorDash, Ola and more — let alone Amazon — are hardly underfunded startups.

In fact, this supposed rise of the cloud kitchen gets at the real crux of the matter: the true “expense” of restaurants isn’t rent or labor but marketing. How do you acquire and retain customers in one of the most competitive industries around?

Isaac and Yaffe-Bellany argue that restaurants will join these meal delivery platforms to market their foods. “…[T]hey can hang a shingle inside a meal-delivery app and market their food to the app’s customers, without the hassle and expense of hiring waiters or paying for furniture and tablecloths.”

Let me tell you from the world of media: Relying on other platforms to own your customers on your behalf and wait for “traffic” is a losing proposition, and one that I expect the vast majority of restaurant entrepreneurs to grok pretty quickly.

Instead, it’s the meal delivery companies themselves that will take advantage of this infrastructure, an admission that actually says something provocative about their business models: that they are essentially interchangeable, and the only way to get margin leverage in the industry is to market and sell their own private-label brands.

For example, I get the same food delivered from the same restaurants regularly, but change the service based on which coupon is best this week (for me, that’s Uber Eats, which offered me $100 if I spent it by Friday). That interchangeability makes it hard to build a durable, profitable business. Uber Eats, for instance, is expected to be unprofitable for another half decade or more, while Grubhub’s profit margins remain mired in the single digits.

The great hope for these companies is that cloud kitchens can fill the hole in the accounting math. Private brands drive large profits to grocery stores due to their higher margins, and the hope is that an Uber Burger or a DoorDash Pizza might do the same.

The question, of course, is whether consumers “just want food” or whether they specifically want the pad thai from that restaurant down the street they love because it is raining and they don’t want to walk to it. Food brands have a prodigiously long gestation period, since food choices are deeply personal and take time to shift. Just because these meal delivery platforms start offering a burger or a rice bowl doesn’t suddenly mean that consumers are going to flock to those options.

All of which takes us back to those misplaced investor expectations. Cloud kitchens is an interesting concept, and I have no doubt that we will see these sorts of business models for kitchens sprout up across cities as an option for some restaurant owners. I’m also sure that there will be at least one digital-only brand that becomes successful and is mentioned in every virtual restaurant article going forward as proof that this model is going to upend the restaurant industry.

But the reality is that none of the players here — not the cloud kitchen owners themselves, not the restaurant owners and not the meal delivery platforms — are going to transform their margin structures with this approach. Cloud kitchens is just adding more competition to one of the most competitive industries in the world, and that isn’t a path to leverage.

Read more: https://techcrunch.com/2019/10/17/cloud-kitchens-is-an-oxymoron/

Amazon leads $575M investment in Deliveroo

Amazon is taking a slice of Europe’s food delivery market after the U.S. e-commerce giant led a $575 million investment in Deliveroo.

First reported by Sky yesterday, the Series G round was confirmed in an early U.K. morning announcement from Deliveroo, which said that existing backers, including T. Rowe Price, Fidelity Management and Research Company and Greenoaks, also took part. The deal takes Deliveroo to just over $1.5 billion raised to date. The company was valued at more than $2 billion following its previous raise in late 2017, although no updated valuation was provided today.

London-based Deliveroo operates in 14 countries, including the U.K., France, Germany and Spain, and — outside of Europe — Singapore, Taiwan, Australia and the UAE. Across those markets, it claims it works with 80,000 restaurants with a fleet of 60,000 delivery people and 2,500 permanent employees.

It isn’t immediately clear how Amazon plans to use its new strategic relationship with Deliveroo — it could, for example, integrate it with Prime membership — but this isn’t the firm’s first dalliance with food delivery. The U.S. firm closed its Amazon Restaurants U.K. takeout business last year after it struggled to compete with Deliveroo and Uber Eats. The service remains operational in the U.S.

“Amazon has been an inspiration to me personally and to the company, and we look forward to working with such a customer-obsessed organization,” said Deliveroo CEO and founder Will Shu in a statement.

Shu said the new money will go toward initiatives that include growing Deliveroo’s London-based engineering team, expanding its reach and focusing on new products, including cloud kitchens that can cook up delivery meals faster and more cost-efficiently.

Will Shu, Deliveroo CEO and co-founder, onstage at TechCrunch Disrupt London

Read more: https://techcrunch.com/2019/05/16/amazon-takes-a-bite-into-deliveroo/

How Amazon Taught the Echo Auto to Hear You in a Noisy Car

Dhananjay Motwani is thinking of an animal, and his 20 Questions opponent is, question by question, trying to figure out what it is.

“Is it larger than a microwave oven?”
“Yes.”
“Can it do tricks?”
“Maybe.”
“Is it a predator?”
“No.”
“Is it soft?”
“No.”
“Is it a vegetarian?”
“Yes.”

What’s impressive here isn’t that the questioner is a computer; that’s old hat. It’s that the machine and Motwani are chatting in his blue Hyundai Sonata, trundling along one of Silicon Valley’s many freeways. The traffic, as it tends to be in this part of the country, is bad. The game is a good way not just to pass the time, but to show off what the Echo Auto can do as we creep toward the Sunnyvale lab where Amazon taught it to understand the human voice in the acoustic crucible that is the car.

Amazon introduced the road-going, Alexa-equipped device in September of last year, and started shipping to some customers in January. Amazon is working with some automakers to build Alexa into new cars, but the $50 Auto works with tens of millions of older vehicles already on the road: All you need is a power source (either a USB port or cigarette lighter) and a way to tap into the car’s speakers (Bluetooth or an aux cable).

About the size and shape of a cassette, the Echo Auto sits on your dashboard and brings 70,000 Alexa skills into your car. Its eight built-in microphones let you make phone calls, set reminders, compile shopping lists, find nearby restaurants and coffee shops, and hear Jake Gyllenhaal narrate The Great Gatsby.

An Artificial Head Measurement System with “the acoustically relevant structures of the human anatomy” plays a key role in Amazon’s development of the Echo Auto. Image: Amazon

Adding the Auto to a growing collection of Echo products makes sense. “There’s no better place for voice than in the car,” says Miriam Daniel, Amazon’s head of Echo products. Your hands are supposed to be on the wheel, your eyes on the road. But when she and her team started developing the thing about 18 months ago, they discovered that there’s no worse place than the car for making voice recognition actually work. “We thought the kitchen was the most challenging acoustic environment,” Daniel says. But family chatter and humming refrigerators proved easy to overcome compared to wind, air conditioning, rain, the radio, and road noise. “The car was like a war zone.”

To safely cross the aural minefield, Daniel’s team started by adapting the Echo’s hardware, software, and user interface to the car. That meant adjusting the device so it can handle being turned on and off frequently, and boot up in a few seconds instead of the minute and a half it took when they first tried it. The team adjusted its responses to be shorter. They added geolocation, so the device can point users to the nearest caffeine injection site. They disabled incoming “Drop Ins,” where approved friends and such can automatically connect to one’s Echo device for a chat.

Daniel’s team created new audio cues and streamlined the potentially distracting activity of the Auto’s LED bar. They gave it one tiny speaker to play the occasional error message, but chose to rely on the car’s audio system to do the heavy lifting, to reduce the Auto’s bulk and cost. They tested a variety of microphone arrays and settled on the dashboard as the best placement after eliminating the cupholder (far from the driver’s mouth and prone to rattling about), the air vent (too noisy) and the ceiling (which would leave wires dangling all over the place).

At Amazon’s reliability lab, the Echo Auto endured climatic chambers, heat and UV exposure, drop tests—just what they sound like—and yank tests, in which a specialized device yanks cords out of the thing with different levels of force. Standard stuff for all Echo devices.

But making sure the Echo can hear you properly in a moving car took a new kind of test. That’s why Motwani, an Alexa product manager, is pondering large, not-soft herbivores while driving me to Amazon’s testing complex in Sunnyvale. The complex contains mocked-up kitchens and living rooms, but I’m not allowed to see those. Instead, Motwani leads me into a gray room the size of a one-car garage, most of it taken up by a black Honda Accord.

Amazon built a library of road noises by sending drivers into the wild in cars loaded up with microphones, then playing the sound recorded by each at a speaker in the same location. Image: Amazon

For up to 18 hours on end, the dummy will talk to the Echo Auto sitting on the dash, calling out the same commands and queries over and over again. Image: Amazon

In the driver’s seat is what looks a bit like the upper half of a crash test dummy, a head and shoulders mounted on a gray plastic box. The head features a black cross where a human has eyes and a nose, a pill-shaped opening for a mouth, and unsettlingly accurate, molded ears. Its maker, Head Acoustics, calls it an Artificial Head Measurement System with “the acoustically relevant structures of the human anatomy,” and it’s a common tool in audio testing. Also in the Honda are six large speakers, placed throughout the cabin.

Standing by the computers on a table against one wall, Motwani and two of his fellow Amazon engineers decide to start their demonstration at 40 mph, in the rain. A few keystrokes later, the speakers come to life, and the inside of the unmoving, sheltered car becomes an auditory facsimile of what it sounds like to drive through a storm: the pelting rain, the swiping windshield wipers, the engine running, the tires humming against the wet asphalt. They’ve collected these sounds by sending drivers into the wild in cars loaded up with microphones, then playing the sound recorded by each at the speaker in the same location.

From the computer, the engineers show off the other conditions the car can mimic: different speeds, changing weather conditions, windows up or down, talk radio or music blaring. This is where the dummy goes to work, and when I learn why its sole facial feature is a mouth, which is really a speaker. For up to 18 hours on end, it will talk to the Echo Auto sitting on the dash, calling out the same commands and queries over and over again. The team records Alexa’s responses, looking for weak points and misunderstandings. This is how machine learning happens: You feed your system as much data as you can find. And the process works best when that data is carefully selected (or created) to simulate what Alexa will be listening for.
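
That data-collection loop maps onto a standard trick in far-field speech work: mix clean utterances with recorded cabin noise at controlled signal-to-noise ratios to manufacture training data. Here is a minimal sketch of the idea in Python; the file names and SNR values are illustrative, not Amazon’s actual pipeline.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay cabin noise on a clean utterance at a target SNR in dB."""
    if len(noise) < len(speech):  # loop short noise clips to cover the utterance
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2)
    # Scale noise so that 10 * log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative file names; a real pipeline sweeps thousands of utterances
# against many recorded conditions (speeds, weather, radio, windows).
speech, rate = sf.read("alexa_command.wav")
noise, _ = sf.read("rain_40mph_wipers.wav")

for snr in (20, 10, 5, 0):  # from quiet cabin down to "war zone"
    sf.write(f"train_snr{snr:02d}.wav", mix_at_snr(speech, noise, snr), rate)
```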

Now that the Echo Auto has shipped to some customers, the garage-lab is focused on improving its performance in extreme conditions like convertibles and rain (though probably not the combination of the two). Like other Alexa products, it will keep getting better, and keep adding skills. But today, at least, it hasn’t bested the human mind: my ride with Motwani ended before it could figure out what animal he was thinking of. It was an elephant.


Read more: https://www.wired.com/story/amazon-echo-auto-engineering/

Amazon Live is the retailer’s latest effort to take on QVC with live-streamed video

Amazon is taking on QVC with the launch of Amazon Live, which features live-streamed video shows from Amazon talent as well as those from brands that broadcast their own live streams through a new app, Amazon Live Creator. On the live shows, hosts talk about and demonstrate products available for sale on Amazon, much like they do on QVC. Beneath that sits a carousel where shoppers can browse product details and make purchases.

Multiple videos stream on Amazon Live at the same time, so shoppers can tune in to the one that most interests them.

For example, Amazon Live is currently streaming a Valentine’s Day Gift Shop show, a cooking-focused show (In the Kitchen with @EdenEats) and Back to Business Live, which is showing off products aimed at daycare centers and schools.

You can tap on the different videos to change streams, scroll down to watch recordings of those videos that were recently live or view which live shows are coming up next.

On the web, the live-streaming site is available at Amazon.com/Live, but it’s not listed yet in Amazon’s main navigation menus so it remains hard to find. On mobile, there’s now a section labeled “Amazon Live” that’s appearing on both the iOS and Android app’s main navigation menu as of a recent app update.

We’ve confirmed the page Amazon.com/Live is newly added, though this is not the first time Amazon has offered live streams.

The retailer has dabbled in live streaming in the past, with mixed results.

Two years ago, it pulled the plug on its short-lived effort, Style Code Live, which also offered a QVC-like home shopping experience. The live show featured hosts with TV and broadcast backgrounds, and brought in experts to talk about beauty and style tips.

But Style Code Live focused only on fashion and beauty.

Amazon Live, on the other hand, covers all sorts of products, ranging from smart home to games to toys to kitchen items to home goods to electronics and much more. It’s also positioned differently. Instead of being a single live video show featuring only Amazon talent and guests, live streaming is something Amazon is opening up to brands that want to reach a wider audience and get their products discovered.

Above: Amazon Live hosts – according to LinkedIn, they are not Amazon employees

You may have seen some of these live-streamed videos from brands in the past.

On Prime Day 2017 and again in 2018, Amazon aired live video streams promoting some of the Prime Day deals. These videos were produced by the brands, very much like some you’ll now find on Amazon Live.

The company has also aired live-streamed content on its Today’s Deals page, and has allowed brands to stream to their product pages, their Store and on Amazon.com/Live before today.

Amazon now aims to make it easier for brands to participate on Amazon Live, too.

On a website detailing Amazon Live, Amazon touts how live-streaming video can drive sales, allow a brand to interact with their customers in real time — including through chat during the live stream — and reach more shoppers. One early tester, card game maker “Watch Ya’ Mouth,” is quoted saying that live streaming had helped to increase daily visits to its product detail page by 5x and “significantly grew our sales.”

The informational site also points brands to Amazon’s new app for live streaming, Amazon Live Creator.

Available only on iOS, the app allows a brand to stream its video content directly to Amazon.com on desktop, mobile and within the Amazon mobile app. The app supports streaming directly from the smartphone itself or through an encoder using a professional camera.

It also includes built-in analytics so brands can determine how well their stream performed, including things like how much of their budget they’ve spent on “boosting” (a way to pay to reach more shoppers), total views, unmuted views and other metrics.

According to data from Sensor Tower, Amazon Live Creator was released yesterday, on February 7, 2019, and is currently unranked on the App Store. It has no reviews, but has a five-star rating.

Currently, the live-streaming feature is open to U.S. Professional Sellers registered in the Amazon Brand Registry, Amazon’s website says, and live streaming from China and Hong Kong is not supported.

Amazon has been interested in live streaming for some time. The company patented its idea around live video shopping last year and was spotted hiring for its Amazon Live efforts before that.

However, Amazon had claimed at the time that its live-stream shopping experiences were “not new.”

That’s true, given that live streams would sometimes appear around big sales, like Prime Day. But Amazon hasn’t promoted its live video this directly to online shoppers since Style Code Live.

This week’s launch of the Amazon Live Creator app for brands and Amazon’s move to create a dedicated link to the Amazon Live streams on its mobile app indicate that live video is becoming a much bigger effort for the retailer, despite its attempt to wave this away as “old news.”

This increased focus on live video also comes at a time when Instagram is rumored to be working on a standalone shopping app and is heavily pushing its creator-focused IGTV product into users’ home feeds. QVC itself just announced its new identity, plans to venture deeper into e-commerce and a shoppable video app. And, of course, YouTube has capitalized on how both live and pre-recorded video demos from brands and influencers can help to sell products like makeup, electronics, toys and more.

Amazon formally declined to comment.

Read more: https://techcrunch.com/2019/02/08/amazon-live-is-the-retailers-latest-effort-to-take-on-qvc-with-live-streamed-video/

Why Amazon is putting Alexa in wall clocks and microwaves


A sampling of some of the new Echo devices Amazon just launched. Image: Karissa Bell/Mashable

In case there was still any doubt about Amazon’s vision for the smart home, the company just made its intentions clear: it wants to dominate every aspect of your house.

The company revealed a dozen new Alexa-powered gadgets on Thursday, including redesigned Echo speakers, a new subwoofer and amplifier, a wall clock, and, yes, a microwave.

Taking over the smart home

Of these, the $59.99 microwave (officially called the AmazonBasics Microwave) attracted much of the attention because, well, it’s pretty damn random, right? But while some wondered about the usefulness of having Alexa inside your microwave, it also offers the clearest look at how Amazon plans to put Alexa on every surface it possibly can.

So why a microwave? Is it actually faster than just pushing a few buttons? According to Amazon, it opted for the microwave because it’s an appliance that hasn’t changed much in the last few decades. And, more importantly, one that can still be frustratingly complicated. Do you know how to use all the built-in presets on your microwave? I definitely don’t.

Though the microwave is Alexa-enabled, it doesn’t have any speakers or microphones built in. Instead, it pairs to a nearby Echo speaker. There is an Alexa button on the microwave, but it’s just a time-saver; push it, and you can simply say the preset you want, like “one potato,” without saying “Alexa” or “microwave.”

At launch later this year, Alexa will be able to understand dozens of presets, as well as commands like “add 30 seconds.” Amazon says more commands will be added over time as well.

Strategically, though, the microwave is about much more than making popcorn slightly faster. It’s powered by something called the Alexa Connect Kit, which will soon be available to the makers of other kitchen gadgets. This means device makers can make their blenders and coffee makers and mixers compatible with Alexa without having to remake their products with microphones and speakers and custom software.
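
Amazon didn’t publish the kit’s protocol here, but the division of labor it implies is straightforward: the appliance carries no microphones, speakers or speech software, just a thin command channel, while the paired Echo and the cloud handle the wake word, speech recognition and intent parsing. A hypothetical sketch of that pattern, with the endpoint and command format invented for illustration:

```python
# Hypothetical appliance-side sketch: no speech stack on the device at all.
# The paired Echo and Amazon's cloud do the listening and parsing, then hand
# the appliance a tiny structured command. Port and payload shape are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_preset(name: str) -> None:
    print(f"starting preset: {name}")  # stand-in for actual magnetron control

def add_cook_time(seconds: int) -> None:
    print(f"adding {seconds} seconds of cook time")

class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        command = json.loads(body)
        # e.g. {"preset": "one_potato"} or {"action": "add_seconds", "value": 30}
        if "preset" in command:
            start_preset(command["preset"])
        elif command.get("action") == "add_seconds":
            add_cook_time(command["value"])
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), CommandHandler).serve_forever()
```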

If you don’t want to wait for manufacturers, though, you’ll have another option: Amazon’s new $24.99 Smart Plug, which lets you control any device you plug into it with your Echo. Think of it as essentially an Alexa-enabled on/off switch. 

Echo Wall Clock. Image: Karissa Bell/Mashable

The somewhat bulky plug does a few neat things in the background as well. You connect it to your home WiFi network by scanning a barcode on the back of the plug with the Amazon app, which should make setup relatively painless.

Finally, there’s the $29.99 Echo Wall Clock, which is meant to take advantage of what might be the most popular feature on all smart speakers: timers. The clock connects to your Echo speaker and gives you a visual cue to track your timers. 

New and improved Echos

Amazon revamped much of its Echo lineup, with new Echo Dot, Plus, and Show speakers. The good news is that all three are way less ugly than the previous models. The Echo Dot, previously a plastic hockey-puck shaped speaker, has been completely redesigned. The new version now looks a bit like a larger Google Home Mini. It’s rounder, and covered in fabric (available in black or white). 

On the inside, the new Echo Dot has also been engineered to sound louder and clearer. In the brief demos I heard, it did better than the original, though I was in a loud room at the time.

All this also means it’s a bit larger than the original, but it shouldn’t take up much more space. Most importantly, the new Echo Dot is priced the same as the original at $49.99.

The larger $149.99 Echo Plus has also ditched the plastic covering in favor of fabric, which, again, makes it look way better and more like a “premium” speaker. It’s also shorter and rounder, making it look more like last year’s Echo 2. On the inside, the Plus has gained a new temperature sensor, so it can detect the temperature of its surroundings, as well as upgraded audio.

The relatively new Echo Show also got a much needed facelift. While the previous version looked like some kind of teleconferencing device, the new Echo Show places the speaker on the side of the device, making it look much less bulky. 

Amazon also delivered its answer to Google’s Chromecast Audio with the $34.99 Echo Input, a thin disc-like gadget you connect to an existing speaker in order to turn it into a smart, Alexa-enabled speaker.

If you’re really serious about upgrading your audio setup, Amazon has offered a solution in the form of the $129.99 Echo Sub. The sub pairs to your existing Echo speakers, which can now be paired in stereo and support multi-room audio. 

In the demo I heard, it sounded pretty good to my ear, with a noticeably thumpy bass, but again, I was in a loud demo room, so it’s hard to judge the audio quality at this point. What is clear is that Amazon wants to fight the perception that Echo speakers aren’t meant for people who care about sound quality.

Does all that seem like too much Alexa? Perhaps. But Amazon doesn’t need you to buy all of its products or even most of them. What it is trying to do is make its ecosystem of hardware and software an essential part of the things you do in your home every day, whether it’s listening to music, turning off the lights, or cooking popcorn. 

It’s no secret that the smart home, right now, is kind of a mess. From complicated setup processes to getting a bunch of disparate gadgets to sync up to one another, we’re still a long way off from the cohesive vision so many tech companies have promised us. 

For Amazon, the solution isn’t just to make Alexa smarter and easier to use, it’s to integrate it with every conceivable appliance and gadget you could possibly need or want. Once you’ve bought into one part of the ecosystem, why wouldn’t you keep investing in it? 

Read more: https://mashable.com/article/new-amazon-echo-devices-hands-on/

Food delivery’s untapped opportunity

Investors may have already placed their orders in the consumer food delivery space, but there’s still a missing recipe for solving the more than $250 billion business-to-business foodservice distribution problem that’s begging for venture firms to put more cooks in the kitchen. 

Stock prices for Sysco and US Foods, the two largest food distributors, are up by more than 20 percent since last summer, when Amazon bought Whole Foods. But these companies haven’t made any material changes to their business model to counteract the threat of Amazon. I know a thing or two about the food services industry and the need for a B2B marketplace in an industry rife with all of our favorite buzzwords: fragmentation, last-mile logistics and a lack of pricing transparency.

The business-to-business food problem

Consumers have it good. Services such as Amazon and Instacart are competing for our business and attention, which makes things great for end users. By comparison, food and ingredient delivery for businesses is vastly underserved. The business of foodservice distribution hasn’t gotten nearly as much attention — or capital — as consumer delivery, and the industry is further behind when it comes to serving customers. Food-preparation facilities often face a number of difficulties getting the ingredients to cook the food we all enjoy.

Who are these food-preparation facilities? They range from your local restaurants, hotels, school and business cafeterias, catering companies, and many other facilities that supply to grocery markets, food trucks and so on. This market is gigantic. Ignoring all other facilities, just U.S. restaurants alone earn about $800 billion in annual sales. That’s based on research by the National Restaurant Association (the “other NRA”). Specific to foodservice distribution in the U.S., the estimated 2016 annual sales were a sizable $280 billion.

How it works today

Every one of these food-preparation facilities relies on a number of relationships with distributors (and sometimes, but rarely, directly from farms) to get their necessary ingredients. Some major national players, including Sysco and US Foods, mainly supply “dry goods.” For fresh meats, seafood and produce, plus other artisanal goods, these facilities rely on a large number of local wholesale distributors. A few examples of wholesalers and distributors near where I live in the San Francisco Bay Area are ABS Seafood, Golden Gate Meat Company, Green Leaf, Hodo Soy and VegiWorks.

Keep in mind that the vast majority of these food-prep businesses don’t shop for ingredients the way you and I may shop for ingredients from our local supermarkets or farmers markets. There’s too little margin in food, and doing so would be too costly, as well as highly inefficient (e.g. having to pay to send staff out “grocery shopping”). A few small operators do buy ingredients from wholesale chains such as Costco or Restaurant Depot. But in general, it’s way more efficient to place an order with a distributor and get the goods delivered directly to your food-prep facility.

But that’s where the problems lie. These distributors are completely fragmented, and the quality of fresh ingredients varies meaningfully from one distributor to the next. Prices fluctuate constantly, typically on a weekly basis. What’s worse is delivery timeliness, or rather the lack thereof. Each of these distributors employs its own delivery staff and refrigerated trucks. There is a limited number of 6 a.m. deliveries a given delivery fleet can make.

As a food business operator, you may be ordering quality ingredients at the right price, but if the delivery doesn’t show up on time, you’re outta luck. You won’t be able to prepare the food in time, all the while paying for staff who are sitting around waiting for ingredients to arrive.

As a result, you keep getting seemingly random offline pitches with promotions and price breaks from these distributors. But there’s no way to ensure timely delivery. Everybody makes verbal promises and it’s all based on who you know. Things may work for a week or two until you get “deprioritized” by one of the distributors and you have to start the process of finding the next one.

You intentionally rotate among the different distributors, just to keep them “on their toes.”

The opportunity for a food distribution platform

What’s missing is a platform that hosts a catalog of products from these distributors, with updatable availability, pricing and inventory. On it, food businesses could browse for products and place orders. Fulfillment can be done by the distributors at the beginning, but ultimately that operation may need to be done by the platform to maintain consistent quality of service. Reliable fulfillment may end up being the biggest differentiator for such a platform.
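
To make the shape of such a platform concrete, here is a hypothetical sketch of its core data model; every class and field below is my own illustration, not an existing product.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Listing:
    """One distributor's current offer for one product."""
    distributor: str       # e.g. "ABS Seafood" or "Green Leaf"
    product: str           # e.g. "whole king salmon"
    unit: str              # e.g. "lb"
    price_per_unit: float  # refreshed often, since prices fluctuate weekly
    qty_available: int
    quoted_on: date        # stamp every quote so stale prices never surface

@dataclass
class OrderLine:
    listing: Listing
    quantity: int

    def subtotal(self) -> float:
        return self.listing.price_per_unit * self.quantity

@dataclass
class Order:
    buyer: str            # restaurant, cafeteria, caterer...
    delivery_window: str  # e.g. "06:00-07:00", the make-or-break constraint
    lines: List[OrderLine] = field(default_factory=list)

    def total(self) -> float:
        return sum(line.subtotal() for line in self.lines)
```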

I’m aware of startups that have tried to become the dominant B2B platform for food service distribution. But it takes meaningful resources to get to critical mass, and these startups tend to flame out before reaching that point. It’s not necessarily their fault for not being effective.

This industry has low margins, is slow to adopt new technologies and has many incumbent players. But the opportunity to design and execute on this platform is significant, with clear ROI as a reward and a built-in moat once it reaches critical mass.

Food-prep businesses are hungry for a better solution. And as any food entrepreneur knows, hungry customers are the best kind.

Read more: https://techcrunch.com/2018/05/16/food-deliverys-untapped-opportunity/

Barnes & Noble teeters in a post-text world

Barnes & Noble, that once proud anchor to many a suburban mall, is waning. It is not failing all at once, dropping like the savaged corpse of Toys “R” Us, but it is also clear that its cultural moment has passed and that only drastic measures can save it from joining Waldenbooks and Borders in the great, paper-smelling ark of our book-buying memory. I’m thinking about this because David Leonhardt at The New York Times calls for B&N to be saved. I doubt it can be.

First, there is the sheer weight of real estate and the inexorable slide away from print. B&N is no longer a place to buy books. It is a toy store with a bathroom and a cafe (and now a restaurant?), a spot where you’re more likely to find Han Solo bobbleheads than a Star Wars novel. The old joy of visiting a bookstore and finding a few magical books to drag home is fast being replicated by smaller bookstores where curation and provenance are still important, while B&N pulls more and more titles.

But does all of this matter? Will the written word — what you’re reading right now — survive the next century? Is there any value in a book when VR and AR and other interfaces can recreate what amounts to the implicit value of writing? Why save B&N if writing is doomed?

Indulge me for a moment and then argue in the comments. I’m positing that B&N’s failure is indicative of a move toward a post-text society: AI and new media will redefine how we consume the world, and the fact that we see more videos than text on our Facebook feed – ostensibly the world’s social nervous system – is a sign of this change.

First, some thoughts on writing versus film. In his book of essays, Distrust That Particular Flavor, William Gibson writes about the complexity and education and experience needed to consume various forms of media:

The book has been largely unchanged for centuries. Working in language expressed as a system of marks on a surface, I can induce extremely complex experiences, but only in an audience elaborately educated to experience this. This platform still possesses certain inherent advantages. I can, for instance, render interiority of character with an ease and specificity denied to a screenwriter.

But my audience must be literate, must know what prose fiction is and understand how one accesses it. This requires a complexly cultural education, and a certain socioeconomic basis. Not everyone is afforded the luxury of such an education.

But I remember being taken to my first film, either a Disney animation or a Disney nature documentary (I can’t recall which I saw first), and being overwhelmed by the steep yet almost instantaneous learning curve: In that hour, I learned to watch film.

This is a deeply important idea. First, we must appreciate that writing and film offer various value adds beyond linear storytelling. In the book, the writer can explore the inner space of the character, giving you an imagined world in which people are thinking, not just acting. Film — also a linear medium — offers a visual representation of a story and thoughts are inferred by dint of their humanity. We know a character’s inner life thanks to the emotion we infer from their face and body.

This is why, to a degree, the CGI human was so hard to make. Thanks to books, comics, and film we, as humans, were used to giving animals and enchanted things agency. Steamboat Willie mostly thought like us, we imagined, even though he was a mouse with big round ears. Fast-forward to the dawn of CGI humans — think Sid from Toy Story and his grotesque face — and then fly even further into the future, to Leia looking out over a space battle and mumbling “Hope,” and you see the scope of achievement in CGI humans as well as the deep problems with representing humans digitally. A CGI car named Lightning McQueen acts and thinks like us, while a CGI Leia looks slightly off. We cannot associate agency with fake humans, and that’s a problem.

Thus we needed books to give us that inner look, that frisson of discovery that we are missing in real life.

But soon — and we can argue that films like Infinity War prove this — there will be no uncanny valley. We will be unable to tell if a human on screen or in VR is real or fake and this allows for an interesting set of possibilities.

First, with VR and other tricks, we could see through a character’s eyes and even hear her thoughts. This interiority, as Gibson writes, is no longer found in the realm of text and is instead an added attraction to an already rich medium. Imagine hopping from character to character, the reactions and thoughts coming hot and heavy as they move through the action. Maybe the story isn’t linear. Maybe we make it up as we go along. Imagine the remix, the rebuild, the restructuring.

Gibson again:

This spreading, melting, flowing together of what once were distinct and separate media, that’s where I imagine we’re headed. Any linear narrative film, for instance, can serve as the armature for what we would think of as a virtual reality, but which Johnny X, eight-year-old end-point consumer, up the line, thinks of as how he looks at stuff. If he discovers, say, Steve McQueen in The Great Escape, he might idly pause to allow his avatar a freestyle Hong Kong kick-fest with the German guards in the prison camp. Just because he can. Because he’s always been able to. He doesn’t think about these things. He probably doesn’t fully understand that that hasn’t always been possible.

In this case B&N and the bookstore don’t need to exist at all. We get the depth of books with the vitality of film melded with the immersion of gaming. What about artisanal book lovers, you argue? They’ll keep things alive because they love the feel of books.

When that feel — the scent, the heft, the old book smell — can be simulated, do we need to visit a bookstore? When Amazon and Netflix spend millions to explore new media and are sure to branch out into more immersive forms, do you need to immerse yourself in To The Lighthouse? Do we really need the education we once had to gain in order to read a book?

We know that Amazon doesn’t care about books. They used books as a starting point to taking over e-commerce and, while the Kindle is the best system for e-books in existence, it is an afterthought compared to the rest of the business. In short, the champions of text barely support it.

Ultimately, what I posit here depends on a number of changes coming all at once. We must all agree to fall headfirst into some shared hallucination that replaces all other media. We must feel that that world is real enough for us to abandon our books.

It’s up to book lovers, then, to decide what they want. They have to support and pay for novels, non-fiction, and news. They have to visit small booksellers and keep demand for books alive. And they have to make it possible to exist as a writer. “Publishers are focusing on big-name writers. The number of professional authors has declined. The disappearance of Borders deprived dozens of communities of their only physical bookstore and led to a drop in book sales that looks permanent,” writes Leonhardt and he’s right. There is no upside for text slingers.

In the end perhaps we can’t save B&N. Maybe we let it collapse into a heap like so many before it. Or maybe we fight for a medium that is quickly losing cachet. Maybe we fight for books and show that just because the big guys on the block can’t make a bookstore work doesn’t mean the rest of us don’t care. Maybe we tell the world that we just want to read.

I shudder to think what will happen if we don’t.

Read more: https://techcrunch.com/2018/05/07/barnes-noble-teeters-in-a-post-text-world/

Amazon explains Alexa’s creepy laugh

Image: Amazon/Mashable

Amazon has finally revealed why Alexa is randomly laughing, creeping out Echo users. 

BuzzFeed first reported earlier this week that Amazon Echo users were surprised to hear their devices laughing at random. After confirming the company was working on a fix, Amazon revealed on Wednesday why Alexa was laughing at random.

“In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,'” a spokesperson said in an email. “We are changing that phrase to be ‘Alexa, can you laugh?’ which is less likely to have false positives, and we are disabling the short utterance ‘Alexa, laugh.’ We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”
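
The reasoning in that statement is a standard keyword-spotting trade-off: a longer trigger phrase contains more phonemes, so random background speech is far less likely to match it by accident. A toy illustration of the gating logic, with confidence scores and thresholds invented for the example:

```python
# Toy wake-phrase gate: short phrases match background audio more easily,
# so they must clear a higher confidence bar. Values are invented, not Alexa's.
THRESHOLDS = {
    "alexa, laugh": 0.98,           # short utterance: demand near-certainty
    "alexa, can you laugh?": 0.85,  # longer phrase: fewer accidental matches
}

def should_trigger(phrase: str, asr_confidence: float) -> bool:
    threshold = THRESHOLDS.get(phrase)
    return threshold is not None and asr_confidence >= threshold

# A fragment of TV chatter loosely resembling the short phrase scores 0.91:
print(should_trigger("alexa, laugh", 0.91))           # False: gated out
print(should_trigger("alexa, can you laugh?", 0.91))  # True: longer phrase passes
```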

Amazon says the fix has already been rolled out.

This confirms the theory that Alexa was falsely triggered and not possessed. While it’s promising that the company issued a fix, that probably isn’t enough to comfort users who allegedly heard Alexa laughing without any sound to trigger it, or in the middle of the night.

Here are a few examples people managed to capture when they asked Alexa to repeat the last thing she said, so you can hear just how creepy it is.

Read more: https://mashable.com/2018/03/07/why-amazon-alexa-laughing/

Crack the code on Amazon Web Services


Start earning from where you keep spending.

Heads up: All products featured here are selected by Mashable’s commerce team and meet our rigorous standards for awesomeness. If you buy something, Mashable may earn an affiliate commission.

Amazon Web Services (AWS) is a subsidiary of Amazon that offers cloud computing platforms through paid subscriptions, so you can host your important files and website information. If you’ve never heard of AWS, you may be shocked to learn it’s a $10 billion per year business and continues to grow. In fact, it’s so central to the web that when it goes down, it has been known to break large parts of the internet.

That means people who know how to use AWS are in demand right now, and you can get in on the action too with this Amazon Web Services Certification Training Mega Bundle, on sale for just $69. Anywhere else, the courses in this bundle would cost about $1,300, so you’re getting a 94% discount if you enroll today.

Here are the courses included in this massive bundle: 

AWS Technical Essentials Certification Training

This introductory course follows an AWS syllabus to help train you on all the products, services, and solutions that the platform offers. If you can’t tell an S3 bucket from an instance, this course will teach you all the terminology you’ll need to know in an easily digestible way through seven hours of e-learning content and two live projects. 

Introduction to Amazon S3 Training Course

Amazon S3 is one of the many services AWS offers, providing storage through various interfaces. With this second course, you’ll get an overview of S3 and learn how it integrates with CloudFront and Import/Export services. Once you know how to manage and encrypt S3 files with the tools this course outlines, you’ll be able to dive further into your AWS training.
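
For a flavor of what managing and encrypting S3 files looks like in practice, here is a minimal boto3 sketch; the bucket and key names are placeholders, and it assumes AWS credentials are already configured.

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# Upload a file with server-side encryption (AES-256, managed by S3).
with open("q1.csv", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",  # placeholder bucket name
        Key="reports/2019/q1.csv",
        Body=f,
        ServerSideEncryption="AES256",
    )

# List what is stored under a prefix.
response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```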

Introduction to Amazon Route 53 Training

Route 53 is a scalable, fast, and cost-effective way to connect your users to your infrastructure. In this course, you’ll learn about the Amazon DNS service so you can host your own domain names. It’s a quick lesson, and by the end, you’ll understand Route 53’s basics.
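
A typical hands-on exercise with this service is pointing a hostname at a server by upserting an A record. A minimal boto3 sketch, with the hosted zone ID, domain, and address as placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Point www.example.com at a web server by upserting an A record.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "point www at the new web server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```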

Introduction to Amazon EC2 Training Course

EC2, or Elastic Compute Cloud, is one of the more interesting services AWS offers, allowing users to rent virtual computers to run applications, meaning they don’t need to invest in hardware. Learn all about EC2’s best practices and costs in this course, and by the end of it, you’ll create an Amazon Machine Image. While it may sound complicated, the course breaks it down into a self-paced two-hour lesson.
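
That closing exercise, creating an Amazon Machine Image, reduces to a couple of API calls once an instance exists. A hedged boto3 sketch; the AMI ID is a placeholder, since real IDs vary by region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent a small virtual machine. The ImageId is a placeholder; look up a real
# one for your region in the EC2 console or via describe_images.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

# After configuring the instance, snapshot it as a reusable Machine Image.
ec2.create_image(InstanceId=instance_id, Name="my-first-ami")
```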

AWS Solution Architect Certification Training Course

After these first few courses, you’ll have a pretty basic understanding of how AWS works and how you can implement it efficiently and affordably. The AWS Solution Architect Certification Training Course brings everything you’ve learned together so you can have an in-depth understanding of AWS’s architectural principles and services. You may be surprised to know that AWS-certified solution architects make, on average, $126,000 a year, so these are lucrative skills to know.

Amazon VPC Training Course

This next course in the bundle goes a little deeper than the introductory ones and will help you understand the basic concepts of Amazon Virtual Private Cloud, which is what actually creates the cloud-based networks AWS offers. By course’s end, you’ll learn about subnets, internet gateways, route tables, NAT devices, security groups, and much more. Even better, the VPC training course will show you how to implement these methodologies in real-life scenarios.
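
Those pieces fit together in a predictable order: network, subnet, gateway, route, then a firewall. A compressed boto3 sketch of a minimal public subnet, with all CIDR ranges and names as placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Carve out a private network and one subnet (placeholder CIDR ranges).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet = ec2.create_subnet(VpcId=vpc, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# An internet gateway plus a default route makes the subnet publicly reachable.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc)
rt = ec2.create_route_table(VpcId=vpc)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
ec2.associate_route_table(RouteTableId=rt, SubnetId=subnet)

# A security group acts as the firewall: allow HTTPS in, nothing else.
sg = ec2.create_security_group(
    GroupName="web", Description="allow https", VpcId=vpc
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```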

AWS Lambda Training Course

This is one of the more technical courses in the bundle, which covers AWS Lambda, a serverless computing platform. If you like coding, this one will be of interest to you, as it teaches you how to deploy Python, NodeJS, and Java code to Lambda functions. Lambda integrates with all of the other AWS services covered in this bundle, like S3, so you’ll finally be able to bring all of your knowledge together.
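
As a small taste of the S3 integration the course covers, here is a minimal Python Lambda handler for S3 events; the trigger that connects a bucket to the function is configured on the AWS side, not in this code.

```python
import json

def lambda_handler(event, context):
    """Log every object that lands in the bucket wired to this function."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        print(f"new object: s3://{bucket}/{key} ({size} bytes)")
    return {"statusCode": 200, "body": json.dumps("processed")}
```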

AWS Database Migration Service Course

The last course in the bundle will help you learn how to easily migrate databases to the AWS cloud so that once you know how its services work, you can start actually using it. By the end of the course, you’ll have the skills you need to land a job working with AWS and your resume will thank you.
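
The central DMS operation is the replication task. A minimal boto3 sketch that starts a task already defined in the DMS console and checks its status; the ARN is a placeholder:

```python
import boto3

dms = boto3.client("dms")

# Placeholder ARN for a replication task created beforehand in DMS.
TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"

dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="start-replication",
)

status = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]["Status"]
print(f"replication task status: {status}")
```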

Read more: https://mashable.com/2018/02/01/amazon-web-services-certification-sale/

Inside Amazon’s Artificial Intelligence Flywheel

In early 2014, Srikanth Thirumalai met with Amazon CEO Jeff Bezos. Thirumalai, a computer scientist who’d left IBM in 2005 to head Amazon’s recommendations team, had come to propose a sweeping new plan for incorporating the latest advances in artificial intelligence into his division.

He arrived armed with a “six-pager.” Bezos had long ago decreed that products and services proposed to him must be limited to that length, and include a speculative press release describing the finished product, service, or initiative. Now Bezos was leaning on his deputies to transform the company into an AI powerhouse. Amazon’s product recommendations had been infused with AI since the company’s very early days, as had areas as disparate as its shipping schedules and the robots zipping around its warehouses. But in recent years, there has been a revolution in the field; machine learning has become much more effective, especially in a supercharged form known as deep learning. It has led to dramatic gains in computer vision, speech, and natural language processing.

In the early part of this decade, Amazon had yet to significantly tap these advances, but it recognized the need was urgent. This era’s most critical competition would be in AI—Google, Facebook, Apple, and Microsoft were betting their companies on it—and Amazon was falling behind. “We went out to every [team] leader, to basically say, ‘How can you use these techniques and embed them into your own businesses?’” says David Limp, Amazon’s VP of devices and services.

Thirumalai took that to heart, and came to Bezos for his annual planning meeting with ideas on how to be more aggressive in machine learning. But he felt it might be too risky to wholly rebuild the existing system, fine-tuned over 20 years, with machine-learning techniques that worked best in the unrelated domains of image and voice recognition. “No one had really applied deep learning to the recommendations problem and blown us away with amazingly better results,” he says. “So it required a leap of faith on our part.” Thirumalai wasn’t quite ready—but Bezos wanted more. So Thirumalai shared his edgier option of using deep learning to revamp the way recommendations worked. It would require skills that his team didn’t possess, tools that hadn’t been created, and algorithms that no one had thought of yet. Bezos loved it (though it isn’t clear whether he greeted it with his trademark hyena-esque laugh), so Thirumalai rewrote his press release and went to work.

Srikanth Thirumalai, VP of Amazon Search, was among the leaders tasked with overhauling Amazon’s software with advanced machine learning. Image: Ian C. Bates

Thirumalai was only one of a procession of company leaders who trekked to Bezos a few years ago with six-pagers in hand. The ideas they proposed involved completely different products with different sets of customers. But each essentially envisioned a variation of Thirumalai’s approach: transforming part of Amazon with advanced machine learning. Some of them involved rethinking current projects, like the company’s robotics efforts and its huge data-center business, Amazon Web Services (AWS). Others would create entirely new businesses, like a voice-based home appliance that would become the Echo.

The results have had an impact far beyond the individual projects. Thirumalai says that at the time of his meeting, Amazon’s AI talent was segregated into isolated pockets. “We would talk, we would have conversations, but we wouldn’t share a lot of artifacts with each other because the lessons were not easily or directly transferable,” he says. They were AI islands in a vast engineering ocean. The push to overhaul the company with machine learning changed that.

While each of those six-pagers hewed to Amazon’s religion of “single-threaded” teams—meaning that only one group “owns” the technology it uses—people started to collaborate across projects. In-house scientists took on hard problems and shared their solutions with other groups. Across the company, AI islands became connected. As Amazon's ambition for its AI projects grew, the complexity of its challenges became a magnet for top talent, especially those who wanted to see the immediate impact of their work. This compensated for Amazon's aversion to conducting pure research; the company culture demanded that innovations come solely in the context of serving its customers.

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

The Alexa Effect

The flagship product of Amazon’s push into AI is its breakaway smart speaker, the Echo, and the Alexa voice platform that powers it. These projects also sprang from a six-pager, delivered to Bezos in 2011 for an annual planning process called Operational Plan One. One person involved was an executive named Al Lindsay, an Amazonian since 2004, who had been asked to move from his post heading the Prime tech team to help with something totally new. “A low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice—you speak to it, it speaks to you,” is how he recalls the vision being described to him.

But building that system—literally an attempt to realize a piece of science fiction, the chatty computer from Star Trek—required a level of artificial intelligence prowess that the company did not have on hand. Worse, of the very few experts who could build such a system, even fewer wanted to work for Amazon. Google and Facebook were snapping up the top talent in the field. “We were the underdog,” Lindsay, who is now a VP, says.

Al Lindsay, the VP of Amazon Alexa Engine, says Amazon was the underdog when trying to recruit AI experts to design and build its voice platform. Image: Ian C. Bates

“Amazon had a bit of a bad image, not friendly to people who were research oriented,” says Domingos, the University of Washington professor. The company’s relentless focus on the customer, and its culture of scrappiness, did not jibe with the pace of academia or cushy perks of competitors. “At Google you’re pampered,” Domingos says. “At Amazon you set up your computer from parts in the closet.” Worse, Amazon had a reputation as a place where innovative work was kept under corporate wraps. In 2014, one of the top machine-learning specialists, Yann LeCun, gave a guest lecture to Amazon’s scientists in an internal gathering. Between the time he was invited and the event itself, LeCun accepted a job to lead Facebook’s research effort, but he came anyway. As he describes it now, he gave his talk in an auditorium of about 600 people and then was ushered into a conference room where small groups came in one by one and posed questions to him. But when he asked questions of them, they were unresponsive. This turned off LeCun, who had chosen Facebook in part because it agreed to open-source much of the work of its AI team.

Because Amazon didn’t have the talent in-house, it used its deep pockets to buy companies with expertise. “In the early days of Alexa, we bought many companies,” Limp says. In September 2011, it snapped up Yap, a speech-to-text company with expertise in translating the spoken word into written language. In January 2012, Amazon bought Evi, a Cambridge, UK, AI company whose software could respond to spoken requests like Siri does. And in January 2013, it bought Ivona, a Polish company specializing in text-to-speech, which provided technology that enabled Echo to talk.

But Amazon’s culture of secrecy hampered its efforts to attract top talent from academia. One potential recruit was Alex Smola, a superstar in the field who had worked at Yahoo and Google. “He is literally one of the godfathers of deep learning,” says Matt Wood, the general manager of deep learning and AI at Amazon Web Services. (Google Scholar lists more than 90,000 citations of Smola's work.) Amazon execs wouldn’t even reveal to him or other candidates what they would be working on. Smola rejected the offer, choosing instead to head a lab at Carnegie Mellon.

Director of Alexa Ruhi Sarikaya and VP of Amazon Alexa Engine Al Lindsay led an effort to create not only the Echo line of smart speakers, but also a voice service that could work with other company products. (Photo: Ian C. Bates)

“Even until right before we launched there was a headwind,” Lindsay says. “They would say, ‘Why would I want to work at Amazon—I’m not interested in selling people products!’”

Amazon did have one thing going for it. Since the company works backward from an imagined final product (thus the fanciful press releases), the blueprints can include features that haven’t been invented yet. Such hard problems are irresistible to ambitious scientists. The voice effort in particular demanded a level of conversational AI—nailing the “wake word” (“Hey Alexa!”), hearing and interpreting commands, delivering non-absurd answers—that did not exist.

That project, even without the specifics on what Amazon was building, helped attract Rohit Prasad, a respected speech-recognition scientist at Boston-based tech contractor Raytheon BBN. (It helped that Amazon let him build a team in his hometown.) He saw Amazon’s lack of expertise as a feature, not a bug. “It was green fields here,” he says. “Google and Microsoft had been working on speech for years. At Amazon we could build from scratch and solve hard problems.” As soon as he joined in 2013, he was sent to the Alexa project. “The device existed in terms of the hardware, but it was very early in speech,” he says.

The trickiest part of the Echo—the problem that forced Amazon to break new ground and in the process lift its machine-learning game in general—was something called far-field speech recognition. It involves interpreting voice commands spoken some distance from the microphones, even when they are polluted with ambient noise or other aural detritus. One challenging factor was that the device couldn't waste any time cogitating about what you said. It had to send the audio to the cloud and produce an answer quickly enough that it felt like a conversation, and not like those awkward moments when you're not sure if the person you're talking to is still breathing. Building a machine-learning system that could understand and respond to conversational queries in noisy conditions required massive amounts of data—lots of examples of the kinds of interactions people would have with their Echos. It wasn't obvious where Amazon might get such data.
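
To make the latency constraint concrete, here is a minimal sketch of the shape such a pipeline takes: an on-device gate stands in for the wake-word model, and everything after the wake word has to fit inside a conversational latency budget. This is an illustration, not Amazon's system; the energy threshold, chunk size, and simulated cloud round trip are all invented.

```python
import time
import numpy as np

SAMPLE_RATE = 16000      # 16 kHz, a typical rate for speech pipelines
CHUNK = 1600             # 100 ms of audio per chunk
LATENCY_BUDGET_S = 1.0   # rough budget for a reply to still feel conversational

def wake_word_detected(chunk: np.ndarray, threshold: float = 0.1) -> bool:
    """Stand-in for an on-device wake-word model: a crude energy gate."""
    return float(np.sqrt(np.mean(chunk ** 2))) > threshold

def send_to_cloud(audio: np.ndarray) -> str:
    """Stand-in for the cloud ASR/NLU round trip (pretend network + inference)."""
    time.sleep(0.2)
    return "weather: sunny, 72F"

# Simulated microphone stream: quiet room, then a louder utterance.
stream = [np.random.normal(0, 0.01, CHUNK) for _ in range(5)] + \
         [np.random.normal(0, 0.5, CHUNK) for _ in range(10)]

awake, buffered = False, []
for chunk in stream:
    if not awake:
        awake = wake_word_detected(chunk)  # everything before this stays on-device
        continue
    buffered.append(chunk)                 # only post-wake audio goes to the cloud

if buffered:
    start = time.monotonic()
    reply = send_to_cloud(np.concatenate(buffered))
    elapsed = time.monotonic() - start
    print(f"reply={reply!r} in {elapsed:.2f}s (budget {LATENCY_BUDGET_S}s)")
```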

Various Amazon devices and third-party products now use the Alexa voice service. Data collected through Alexa helps improve the system and supercharges Amazon's broader AI efforts. (Photo: Ian C. Bates)

Far-field technology had been done before, says Limp, the VP of devices and services. But “it was on the nose cone of Trident submarines, and it cost a billion dollars.” Amazon was trying to implement it in a device that would sit on a kitchen counter, and it had to be cheap enough for consumers to spring for a weird new gadget. “Nine out of 10 people on my team thought it couldn’t be done,” Prasad says. “We had a technology advisory committee of luminaries outside Amazon—we didn’t tell them what we were working on, but they said, ‘Whatever you do, don’t work on far field recognition!’”

Prasad’s experience gave him confidence that it could be done. But Amazon did not have an industrial-strength system in place for applying machine learning to product development. “We had a few scientists looking at deep learning, but we didn’t have the infrastructure that could make it production-ready,” he says. The good news was that all the pieces were there at Amazon—an unparalleled cloud service, data centers loaded with GPUs to crunch machine-learning algorithms, and engineers who knew how to move data around like fireballs.

His team used those parts to create a platform that was itself a valuable asset, beyond its use in fulfilling the Echo’s mission. “Once we developed Echo as a far-field speech recognition device, we saw the opportunity to do something bigger—we could expand the scope of Alexa to a voice service,” says Alexa senior principal scientist Spyros Matsoukas, who had worked with Prasad at Raytheon BBN. (His work there had included a little-known Darpa project called Hub4, which used broadcast news shows and intercepted phone conversations to advance voice recognition and natural language understanding—great training for the Alexa project.) One immediate way they extended Alexa was to allow third-party developers to create their own voice-technology mini-applications—dubbed “skills”—to run on the Echo itself. But that was only the beginning.
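
Skills are now a public developer surface, so the shape of one is easy to show. Below is a toy handler written with the ASK SDK for Python; the intent name and response text are invented for illustration, and a real skill also needs an interaction model defined in the Alexa developer console.

```python
# A toy skill handler in the shape third-party developers use with the
# ASK SDK for Python (ask-sdk-core). "CoffeeCountIntent" is hypothetical.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class CoffeeCountHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("CoffeeCountIntent")(handler_input)

    def handle(self, handler_input):
        speech = "You have had three coffees today. Maybe switch to water."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(CoffeeCountHandler())
lambda_handler = sb.lambda_handler()  # deployed as an AWS Lambda entry point
```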

Spyros Matsoukas, a senior principal scientist at Amazon, helped turn Alexa into a force for strengthening Amazon's company-wide culture around AI. (Photo: Adam Glanzman)

As Alexa broke out beyond the Echo, the company's AI culture started to coalesce. Teams across the company began to realize that Alexa could be a useful voice service for their pet projects too. "So all that data and technology comes together, even though we are very big on single-threaded ownership," Prasad says. First, other Amazon products began integrating with Alexa: when you speak to your Alexa device you can access Amazon Music, Prime Video, your personal recommendations from the main shopping website, and other services. Then the technology began hopscotching through other Amazon domains. "Once we had the foundational speech capacity, we were able to bring it to non-Alexa products like Fire TV, voice shopping, the Dash Wand for Amazon Fresh and, ultimately, AWS," Lindsay says.

The AI islands within Amazon were drawing closer.

Another pivotal piece of the company’s transformation clicked into place once millions of customers (Amazon won’t say exactly how many) began using the Echo and the family of other Alexa-powered devices. Amazon started amassing a wealth of data—quite possibly the biggest collection of interactions of any conversation-driven device ever. That data became a powerful lure for potential hires. Suddenly, Amazon rocketed up the list of places where those coveted machine-learning experts might want to work. “One of the things that made Alexa so attractive to me is that once you have a device in the market, you have the resource of feedback. Not only the customer feedback, but the actual data that is so fundamental to improving everything—especially the underlying platform,” says Ravi Jain, an Alexa VP of machine learning who joined the company last year.

So as more people used Alexa, Amazon got information that not only made that system perform better but supercharged its own machine-learning tools and platforms—and made the company a hotter destination for machine-learning scientists.

The flywheel was starting to spin.

A Brainier Cloud

Amazon began selling Echo to Prime customers in 2014. That was also the year that Swami Sivasubramanian became fascinated with machine learning. Sivasubramanian, who was managing the AWS database and analytics business at the time, was on a family trip to India when, thanks to a combination of jet lag and a cranky infant daughter, he found himself at his computer late into the night fiddling with tools like Google's TensorFlow and Caffe, the machine-learning framework favored by Facebook and many in the academic community. He concluded that combining these tools with Amazon's cloud service could yield tremendous value. By making it easy to run machine-learning algorithms in the cloud, he thought, the company might tap into a vein of latent demand. "We cater to millions of developers every month," he says. "The majority are not professors at MIT but developers who have no background in machine learning."

Swami Sivasubramanian, VP of AI at AWS, was among the first to realize the business implications of integrating AI tools into the company's cloud services. (Photo: Ian C. Bates)

At his next review with Bezos, he came armed with an epic six-pager. On one level, it was a blueprint for adding machine-learning services to AWS. But Sivasubramanian saw it as something broader: a grand vision of how AWS could become the throbbing center of machine-learning activity throughout all of techdom.

In a sense, offering machine learning to the tens of thousands of Amazon cloud customers was inevitable. “When we first put together the original business plan for AWS, the mission was to take technology that was only in reach of a small number of well-funded organizations and make it as broadly distributed as possible,” says Wood, the AWS machine-learning manager. “We’ve done that successfully with computing, storage, analytics, and databases—and we’re taking the exact same approach with machine learning.” What made it easier was that the AWS team could draw on the experience that the rest of the company was accumulating.

AWS’s Amazon Machine Learning, first offered in 2015, allows customers like C-Span to set up a private catalog of faces, Wood says. Zillow uses it to estimate house prices. Pinterest employs it for visual search. And several autonomous driving startups are using AWS machine learning to improve products via millions of miles of simulated road testing.

In 2016, AWS released new machine-learning services that more directly drew on the innovations from Alexa—a text-to-speech component called Polly and a natural language processing engine called Lex. These offerings allowed AWS customers, which range from giants like Pinterest and Netflix to tiny startups, to build their own mini Alexas. A third service involving vision, Rekognition, drew on work that had been done in Prime Photos, a relatively obscure group at Amazon that was trying to perform the same deep-learning wizardry found in photo products by Google, Facebook, and Apple.
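
The on-ramp for these services is deliberately thin. As a rough sketch (assuming AWS credentials are configured, and with placeholder file names), calling Polly and Rekognition through boto3 looks like this:

```python
import boto3

# Text-to-speech with Polly: write the synthesized audio to an MP3 file.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(Text="Hello from the flywheel.",
                                 OutputFormat="mp3", VoiceId="Joanna")
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Image labeling with Rekognition: "storefront.jpg" is a placeholder.
rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("storefront.jpg", "rb") as img:
    result = rekognition.detect_labels(Image={"Bytes": img.read()}, MaxLabels=5)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```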

These machine-learning services are both a powerful revenue generator and key to Amazon’s AI flywheel, as customers as disparate as NASA and the NFL are paying to get their machine learning from Amazon. As companies build their vital machine-learning tools inside AWS, the likelihood that they will move to competing cloud operations becomes ridiculously remote. (Sorry, Google, Microsoft, or IBM.) Consider Infor, a multibillion-dollar company that creates business applications for corporate customers. It recently released an extensive new application called Coleman (named after the NASA mathematician in Hidden Figures) that allows its customers to automate various processes, analyze performance, and interact with data all through a conversational interface. Instead of building its own bot from scratch, it uses AWS’s Lex technology. “Amazon is doing it anyway, so why would we spend time on that? We know our customers and we can make it applicable to them,” says Massimo Capoccia, a senior VP of Infor.
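
Coleman's internals aren't public, but the general pattern of delegating the conversational layer to Lex is straightforward to sketch against the Lex (V1) runtime API; the bot name, alias, and query below are invented:

```python
import boto3

# Send one user utterance to a (hypothetical) Lex bot and print its reply.
lex = boto3.client("lex-runtime", region_name="us-east-1")
reply = lex.post_text(
    botName="OrdersBot",      # placeholder, not Infor's actual bot
    botAlias="prod",
    userId="analyst-42",
    inputText="How many orders shipped late last week?",
)
print(reply["message"])
```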

AWS’s dominant role in the ether also gives it a strategic advantage over competitors, notably Google, which had hoped to use its machine-learning leadership to catch up with AWS in cloud computing. Yes, Google may offer customers super-fast, machine-learning-optimized chips on its servers. But companies on AWS can more easily interact with—and sell to—firms that are also on the service. “It’s like Willie Sutton saying he robs banks because that’s where the money is,” says DigitalGlobe CTO Walter Scott about why his firm uses Amazon’s technology. “We use AWS for machine learning because that’s where our customers are.”

Last November at the AWS re:Invent conference, Amazon unveiled a more comprehensive machine-learning prosthetic for its customers: SageMaker, a sophisticated but super easy-to-use platform. One of its creators is none other than Alex Smola, the machine-learning superstar with 90,000 academic citations who spurned Amazon five years ago. When Smola decided to return to industry, he wanted to help create powerful tools that would make machine learning accessible to everyday software developers. So he went to the place where he felt he'd make the biggest impact. "Amazon was just too good to pass up," he says. "You can write a paper about something, but if you don't build it, nobody will use your beautiful algorithm."
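
The basic workflow SageMaker wraps is: train on managed instances, then deploy the result behind an endpoint. A hedged sketch with the SageMaker Python SDK, where the container image, IAM role, and S3 paths are placeholders you would supply yourself:

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train/"})  # launches a managed training job

# Host the trained model behind a real-time inference endpoint.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.xlarge")
```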

When Smola told Sivasubramanian that building tools to spread machine learning to millions of people was more important than publishing one more paper, he got a nice surprise. “You can publish your paper, too!” Sivasubramanian said. Yes, Amazon is now more liberal in permitting its scientists to publish. “It’s helped quite a bit with recruiting top talent as well as providing visibility of what type of research is happening at Amazon,” says Spyros Matsoukas, who helped set guidelines for a more open stance.

It’s too early to know if the bulk of AWS’s million-plus customers will begin using SageMaker to build machine learning into their products. But every customer that does will find itself heavily invested in Amazon as its machine-learning provider. In addition, the platform is sufficiently sophisticated that even AI groups within Amazon, including the Alexa team, say they intend to become SageMaker customers, using the same toolset offered to outsiders. They believe it will save them a lot of work by setting a foundation for their projects, freeing them to concentrate on the fancier algorithmic tasks.

Even if only some of AWS’s customers use SageMaker, Amazon will find itself with an abundance of data about how its systems perform (excluding, of course, confidential information that customers keep to themselves). Which will lead to better algorithms. And better platforms. And more customers. The flywheel is working overtime.

AI Everywhere

With its machine-learning overhaul in place, the company's AI expertise is now distributed across its many teams—much to the satisfaction of Bezos and his consiglieri. While there is no central office of AI at Amazon, there is a unit dedicated to the spread and support of machine learning, as well as some applied research to push new science into the company's projects. The Core Machine Learning Group is led by Ralf Herbrich, who worked on the Bing team at Microsoft and then served a year at Facebook before Amazon lured him in 2012. "It's important that there's a place that owns this community" within the company, he says. (Naturally, the mission of the team was outlined in an aspirational six-pager approved by Bezos.)

Part of his duties includes nurturing Amazon's fast-growing machine-learning culture. Because of the company's customer-centric approach—solving problems rather than doing blue-sky research—Amazon execs do concede that their recruiting efforts will always tilt toward those interested in building things rather than those chasing scientific breakthroughs. Facebook's LeCun puts it another way: "You can do quite well by not leading the intellectual vanguard."

But Amazon is following Facebook and Google’s lead in training its workforce to become adept at AI. It runs internal courses on machine-learning tactics. It hosts a series of talks from its in-house experts. And starting in 2013, the company has hosted an internal machine-learning conference at its headquarters every spring, a kind of Amazon-only version of NIPS, the premier academic machine-learning-palooza. “When I started, the Amazon machine-learning conference was just a couple hundred people; now it’s in the thousands,” Herbrich says. “We don’t have the capacity in the largest meeting room in Seattle, so we hold it there and stream it to six other meeting rooms on the campus.” One Amazon exec remarks that if it gets any bigger, instead of calling it an Amazon machine-learning event, it should just be called Amazon.

Herbrich's group continues to push machine learning into everything the company attempts. For example, the fulfillment teams wanted to better predict which of the eight possible box sizes they should use for a customer order, so they turned to Herbrich's team for help. "That group doesn't need its own science team, but it needed these algorithms and needed to be able to use them easily," he says. In another example, David Limp points to a transformation in how Amazon predicts how many customers might buy a new product. "I've been in consumer electronics for 30 years now, and for 25 of those forecasting was done with [human] judgment, a spreadsheet, and some Velcro balls and darts," he says. "Our error rates are significantly down since we've started using machine learning in our forecasts."
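
Amazon hasn't published that model, but the box-size problem is a textbook multiclass classification task. A toy sketch on synthetic data, with eight volume buckets standing in for the eight box sizes:

```python
# Illustrative only: predict one of eight "box sizes" from item dimensions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features: length, width, height (cm) and weight (kg) for 5,000 fake items.
X = rng.uniform([1, 1, 1, 0.1], [60, 40, 40, 20], size=(5000, 4))
volumes = X[:, :3].prod(axis=1)
# Labels: eight volume buckets standing in for the eight box sizes.
y = np.digitize(volumes, np.quantile(volumes, np.linspace(0, 1, 9)[1:-1]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"box-size accuracy: {clf.score(X_te, y_te):.2f}")
```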

Still, sometimes Herbrich’s team will apply cutting-edge science to a problem. Amazon Fresh, the company’s grocery delivery service, has been operating for a decade, but it needed a better way to assess the quality of fruits and vegetables—humans were too slow and inconsistent. His Berlin-based team built sensor-laden hardware and new algorithms that compensated for the inability of the system to touch and smell the food. “After three years, we have a prototype phase, where we can judge the quality more reliably” than before, he says.

Of course, such advances can then percolate throughout the Amazon ecosystem. Take Amazon Go, the deep-learning-powered cashier-less grocery store in its headquarters building that recently opened to the public. “As a customer of AWS, we benefit from its scale,” says Dilip Kumar, VP of Technology for Amazon Go. “But AWS is also a beneficiary.” He cites as an example Amazon Go’s unique system of streaming data from hundreds of cameras to track the shopping activities of customers. The innovations his team concocted helped influence an AWS service called Kinesis, which allows customers to stream video from multiple devices to the Amazon cloud, where they can process it, analyze it, and use it to further advance their machine learning efforts.
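
On the customer side, the Kinesis Video entry point is small: create a stream, then ask for its ingest endpoint. A sketch with boto3, using an invented stream name; actually pushing camera frames uses the separate PutMedia producer interface:

```python
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-east-1")

# "store-camera-07" is a placeholder stream name.
kvs.create_stream(StreamName="store-camera-07", DataRetentionInHours=24)

endpoint = kvs.get_data_endpoint(StreamName="store-camera-07",
                                 APIName="PUT_MEDIA")["DataEndpoint"]
print("send camera frames to:", endpoint)
```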

Even when an Amazon service doesn’t yet use the company’s machine-learning platform, it can be an active participant in the process. Amazon’s Prime Air drone-delivery service, still in the prototype phase, has to build its AI separately because its autonomous drones can’t count on cloud connectivity. But it still benefits hugely from the flywheel, both in drawing on knowledge from the rest of the company and figuring out what tools to use. “We think about this as a menu—everybody is sharing what dishes they have,” says Gur Kimchi, VP of Prime Air. He anticipates that his team will eventually have tasty menu offerings of its own. “The lessons we’re learning and problems we’re solving in Prime Air are definitely of interest to other parts of Amazon,” he says.

In fact, it already seems to be happening. "If somebody's looking at an image in one part of the company, like Prime Air or Amazon Go, and they learn something and create an algorithm, they talk about it with other people in the company," says Beth Marcus, a principal scientist at Amazon Robotics. "And so someone in my team could use it to, say, figure out what's in an image of a product moving through the fulfillment center."

Beth Marcus, senior principal technologist at Amazon Robotics, has seen the benefits of collaborating with the company's growing pool of AI experts. (Photo: Adam Glanzman)

Is it possible for a company with a product-centered approach to eclipse the efforts of competitors staffed with the superstars of deep learning? Amazon’s making a case for it. “Despite the fact they’re playing catchup, their product releases have been incredibly impressive,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. “They’re a world-class company and they’ve created world-class AI products.”

The flywheel keeps spinning, and we haven’t seen the impact of a lot of six-pager proposals still in the pipeline. More data. More customers. Better platforms. More talent.

Alexa, how is Amazon doing in AI?

The answer? Jeff Bezos’s braying laugh.

Read more: https://www.wired.com/story/amazon-artificial-intelligence-flywheel/

Amazon adds loads more branded Dash buttons in UK

Amazon has doubled the total selection of branded Dash buttons available to UK members of its Prime subscription service, to more than 100, just over a year after launching the push-button wi-fi gizmos, which let people reorder a specific product via its ecommerce marketplace just by pushing the button.

The first Dash buttons launched in the UK in August last year. Amazon now says Dash Button orders have delivered more than 160,000 cups of coffee and almost 300,000 rolls of toilet paper in the market.

In typical Amazon fashion, it's not breaking out any hard metrics for the buttons, which cost £4.99 apiece (though users then get a £4.99 discount on their first Dash push order, so sticking these things all over your white goods comes at essentially zero additional cost, assuming you're already locked into Amazon's Prime membership program).

Reordering toilet roll is the most popular Dash push for UK users, according to the ecommerce giant, followed by dishwasher tablets, cat litter, cat food, beer, mouthwash and baby wipes. So this gadget is most definitely one to file under 'utility & convenience' (not 'shiny & sexy').

Among the new brands willingly sticking themselves on Dash buttons are Bold, Cillit Bang, English Tea Shop, evian, Febreze, Flash, Gaviscon, Harringtons, Head & Shoulders, Pampers, Purina Gourmet, SMA, Tampax, Vet’s Best and Waterwipes.

The full list of new (and existing) UK Dash buttons can be found here.

For fast moving consumer goods brands, which inevitably have stacks of similarly priced rival products vying to catch consumers’ eyes on shop shelves, the chance to peel away and monopolize consumers’ attention in their own homes is clearly the equivalent of catnip.

Add in the fact that Dash also reduces friction for repeat orders of their product and, well, there's really no downside as far as the brands are concerned. Dash buttons for every kind of staple seem inevitable — at least until some kind of instant reordering gets integrated into products themselves.

Until then an unknown number of Brits are apparently comfortable pebble-dashing their homes with stick-on buttons. Or at least happy to put a Dash button for reordering bog roll somewhere near the toilet (hopefully in close proximity to soap and hot water).

Read more: https://techcrunch.com/2017/10/18/amazon-adds-loads-more-branded-dash-buttons-in-uk/

The Echo Spot is the best Echo

The message of today’s big Amazon event was pretty clear: Echos for everyone, for every need in every room of every home. The company clearly has no desire to create one device to rule them all. Instead, it’s building out micro functionality, with every product designed to target different needs for different users. 

While I'm still working through all of the details from today's announcement (and probably will be at least until next week's Google event), one Echo pretty clearly stands head and shoulders above the rest. It was clear from the moment the company announced the Spot that it was the most exciting of the bunch. The new device takes the lessons learned from the company's best-selling Echo Dot and applies them to its formerly most compelling product, the Echo Show.

The Spot is, in essence, a cross-pollination of the two products, as its name implies (Show + Dot = Spot). It offers all the basic functionality of the Show and applies it to a much smaller form factor, at a far more affordable price point ($130). Of course, the company had to make some sacrifices to get there — the screen is much smaller, at 2.5 inches, and the on-board speakers suck (though, like the Dot, there is audio out here).

But along with those sacrifices comes a device that’s capable of fitting into far more spots in the home (pun possibly intended, I’m not really sure anymore) and serving a number of interesting new purposes. It’s no mistake that Amazon led with alarm clock functionality. In the time I spent with the Show, the morning was far and away the time I most engaged with the product — morning news, weather, traffic and the like are great when you’re on your way out the door.

The Spot is the product Chumby was trying to be, but one built with better technology and with a far more robust set of skills that make it an ideal bedside companion (though maybe put some tape over the camera while you sleep, because yuck). Sorry Chumby, the world just wasn’t ready for you. The Spot also features things like security camera compatibility, which make it ideal for other spots, like the kitchen.

The device takes the best bits from the best Echos, and the result is the most interesting of the bunch. The new Echo is certainly an improvement over its predecessor, but honestly, I don't really spend a lot of time with Amazon Music. The Echo Plus, meanwhile, seems targeted toward users who want connected devices, but don't know how to go about doing the connecting. And the Echo Buttons — well, they're kind of the In Through the Out Door of the Echo catalog.

The Spot's December release date is no coincidence. This thing is going to be a big holiday seller. And frankly, I wouldn't be surprised if it topped the Echo sales charts at the end of the year.

Read more: https://techcrunch.com/2017/09/27/the-echo-spot-is-the-best-echo/
