Walmart to expand in-store tech, including Pickup Towers for online orders and robots

Walmart is doubling down on technology in its brick-and-mortar stores in an effort to better compete with Amazon. The retailer today announced the expanded rollout of several technologies, ranging from in-store Pickup Towers that help customers quickly grab their online orders to floor-scrubbing robots. In many cases, these jobs were previously handled by people rather than machines.

The retailer says it will add 1,500 new autonomous floor cleaners, 300 more shelf scanners, 1,200 more FAST Unloaders and 900 new Pickup Towers to its U.S. stores.

The “Auto-C” floor cleaner is programmed to clean and polish the store’s floor after the area is first prepped by associates. Publicly introduced last fall, the floor cleaner uses assisted autonomy technology to clean the floors instead of having an associate ride a scrubbing machine — a process that today eats up two hours of an employee’s time per day.

Built in partnership with Brain Corp., Walmart said in December it planned to deploy 360 floor-cleaning robots by the end of January 2019. It’s now bumping that rollout to include 1,500 more this year, bringing the total deployment to 1,860.

The Auto-S shelf scanners, meanwhile, have been in testing since 2017, when Walmart rolled out 50 robots to U.S. stores. It’s now adding 300 more to production to reach a total of 350.

These robots are produced by California-based Bossa Nova Robotics, and roll around aisles to scan prices and check inventory. The robots sit in a charging station until given a task by an employee — like checking inventory levels to see what needs restocking, identifying and finding misplaced items or locating incorrect prices or labeling.

In the backroom, Walmart has been testing FAST Unloaders that are capable of unloading a truck of merchandise along a conveyor belt in a fraction of the time it could be done by hand. The machines automatically scan and sort the items based on priority and department to speed up the process and direct items appropriately.
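The sort step could look something like the toy sketch below: scanned items are ordered by priority and then grouped by department so they come off the belt headed to the same part of the store together. The field names and priority scheme are illustrative assumptions, not Walmart's actual system.

```python
# Toy sketch of a scan-and-sort step like the one attributed to the
# FAST Unloader. "priority" is assumed here to be 1 = most urgent.
items = [
    {"sku": "A1", "dept": "grocery", "priority": 2},
    {"sku": "B7", "dept": "toys", "priority": 1},
    {"sku": "C3", "dept": "grocery", "priority": 1},
]

# Sort by (priority, department): urgent items first, ties grouped
# by department so each batch is routed to one area of the store.
for item in sorted(items, key=lambda i: (i["priority"], i["dept"])):
    print(item["sku"])  # prints C3, B7, A1
```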

Unloading, the company noted earlier in testing, was also a heavily disliked job — and one it had trouble keeping staffed. Last summer, Walmart said it had 30 unloaders rolled out in the U.S. and was on pace to add 10 more a week.

Now, 1,200 more are being added to stores, bringing the total to 1,700.

The Pickup Towers have also been around since 2017, when they arrived in 200 stores. A sort of vending machine for online orders, the towers let customers skip last-mile delivery: shipping to a store costs Walmart less, customers pay no delivery fee, and they can often get their items faster.

In April 2018, Walmart rolled out 500 more towers to U.S. stores. It’s now adding 900 more, which will see 1,700 total towers in use across its stores.

The company claims all this tech will free up the time its employees spend on "more mundane and repetitive tasks" so they can instead serve customers face-to-face.

Of course, that’s what they all say when turning over people’s jobs to robots and automation — whether that’s fancy coffee-making robotic kiosks, burger-flipping robots or restaurants staffed by a concierge but no kitchen help besides machines.

Walmart, however, claims to still have plenty of work for its staff — like picking groceries for its booming online grocery business, for example. Grocery shopping, generally, accounts for more than half its annual sales, and more of that business is shifting online.

The company also said that many of the jobs it automated were those it struggled to find, hire and retain associates to do, and by taking out the routine work, retention has improved.

“What we’re seeing so far suggests investments in store technology are shaping how we think about turnover and hours. The technology is automating pieces of work or tasks, rather than entire jobs,” a Walmart spokesperson said. “As that’s happening, we have been able to use many of the hours being saved in other areas of the store — focused more on service and selling for customers,” they continued.

“We have now added over 40,000 jobs for the online grocery picking role in stores over the last year and a half. These jobs didn’t exist a short time ago. The result so far: we’ve seen our U.S. store associate turnover reduced year-over-year,” the spokesperson added.

The tech announced today will roll out to U.S. stores “soon,” Walmart says, but didn’t provide exact dates.

Read more: https://techcrunch.com/2019/04/09/walmart-to-expand-in-store-tech-including-pickup-towers-for-online-orders-and-robots/


Not hog dog? PixFood lets you shoot and identify food

What happens when you add AI to food? Surprisingly, you don’t get a hungry robot. Instead you get something like PixFood. PixFood lets you take pictures of food, identify available ingredients and, at this stage, find recipes you can make from your larder.

It is privately funded.

“There are tons of recipe apps out there, but all they give you is, well, recipes,” said Tonnesson. “On the other hand, PixFood has the ability to help users get the right recipe for them at that particular moment. There are apps that cover some of the mentioned, but it’s still an exhausting process – since you have to fill in a 50-question quiz so it can understand what you like.”

They launched in August and currently have 3,000 monthly active users from 10,000 downloads. They’re working on perfecting the system for their first users.

“PixFood is an AI-driven food app with advanced photo recognition. The user experience is quite simple: it all starts with users taking a photo of any ingredient they would like to cook with, in the kitchen or in the supermarket,” said Tonnesson. “Why did we do it like this? Because it’s personalized. After you take a photo, the app instantly sends you tailored recipe suggestions! At first, they are more or less the same for everyone, but as you continue using it, it starts to learn what you precisely like, by connecting patterns and taking into consideration different behaviors.”
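The flow Tonnesson describes, a photo yields an ingredient label, the label keys into a recipe index, and repeated choices bias future suggestions, can be sketched as below. PixFood's actual model and ranking are not public; the classifier stub, recipe index and popularity-based ranking here are all assumptions for illustration.

```python
# Hypothetical sketch of a classify-then-recommend flow. The real
# photo-recognition model is replaced by a labeled placeholder.
RECIPES = {
    "corn": ["corn chowder", "elote", "cornbread"],
    "tomato": ["bruschetta", "tomato soup"],
}

def classify_photo(photo):
    # Stand-in for the photo-recognition step (an assumption).
    return photo["label"]

class RecipeSuggester:
    def __init__(self):
        self.picks = {}  # recipe -> times the user chose it

    def suggest(self, photo):
        ingredient = classify_photo(photo)
        candidates = RECIPES.get(ingredient, [])
        # Rank by how often the user picked each recipe before,
        # so suggestions personalize over time.
        return sorted(candidates, key=lambda r: -self.picks.get(r, 0))

    def record_choice(self, recipe):
        self.picks[recipe] = self.picks.get(recipe, 0) + 1

s = RecipeSuggester()
print(s.suggest({"label": "corn"}))  # initially in index order
s.record_choice("cornbread")
print(s.suggest({"label": "corn"}))  # "cornbread" now ranks first
```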

In my rudimentary tests the AI worked acceptably well and did not encourage me to eat a monkey. While the app begs the obvious question – why not just type in “corn?” – it’s an interesting use of vision technology that is definitely a step in the right direction.

Tonnesson expects the AI to start connecting you with other players in the food space, allowing you to order corn (but not a monkey) from a number of providers.

“Users should also expect partnerships with restaurants, grocery, meal-kit, and other food delivery services will be part of the future experiences,” he said.

Read more: https://techcrunch.com/2018/09/10/not-hog-dog-pixfood-lets-you-shoot-and-identify-food/


Teaching robots to understand their world through basic motor skills

Robots are great at doing what they’re told. But sometimes inputting that information into a system is a far more complex process than the task we’re asking them to execute. That’s part of the reason they’re best suited for simple/repetitive jobs.

A team of researchers at Brown University and MIT is working to develop a system in which robots can plan tasks by developing abstract concepts of real-world objects and ideas based on motor skills. With this system, the robots can perform complex tasks without getting bogged down in the minutiae required to complete them.

The researchers programmed a two-armed robot (Anathema Device or “Ana”) to manipulate objects in a room — opening and closing a cupboard and a cooler, flipping on a light switch and picking up a bottle. While performing the tasks, the robot was taking in its surroundings and processing information through algorithms developed by the researchers.

According to the team, the robot was able to learn abstract concepts about the object and the environment. Ana was able to determine that doors need to be closed before they can be opened.

“She learned that the light inside the cupboard was so bright that it whited out her sensors,” the researchers wrote in a release announcing their findings. “So in order to manipulate the bottle inside the cupboard, the light had to be off. She also learned that in order to turn the light off, the cupboard door needed to be closed, because the open door blocked her access to the switch.”

Once processed, the robot associates a symbol with one of these abstract concepts. It’s a sort of common language developed between the robot and human that doesn’t require complex coding to execute. This kind of adaptive quality means the robots could become far more capable of performing a greater variety of tasks in more diverse environments by choosing the actions they need to perform in a given scenario.
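The planning idea described above can be sketched with a generic textbook search over abstract symbols, where each skill carries preconditions and effects and the robot chains skills toward a goal. This is not the Brown/MIT system itself; the symbol names mirror the cupboard-and-light example from the article, but the planner and its representation are assumptions.

```python
# Each action maps to (preconditions, effects), both expressed over
# abstract boolean symbols such as "cupboard_open" and "light_on".
ACTIONS = {
    "close_cupboard": ({"cupboard_open": True}, {"cupboard_open": False}),
    "open_cupboard": ({"cupboard_open": False, "light_on": False},
                      {"cupboard_open": True}),
    "turn_light_off": ({"cupboard_open": False}, {"light_on": False}),
    "grab_bottle": ({"cupboard_open": True, "light_on": False},
                    {"holding_bottle": True}),
}

def plan(state, goal):
    # Breadth-first search over abstract states; fine at this scale.
    frontier = [(dict(state), [])]
    seen = set()
    while frontier:
        s, steps = frontier.pop(0)
        if all(s.get(k) == v for k, v in goal.items()):
            return steps
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff) in ACTIONS.items():
            if all(s.get(k) == v for k, v in pre.items()):
                frontier.append(({**s, **eff}, steps + [name]))
    return None

start = {"cupboard_open": True, "light_on": True, "holding_bottle": False}
print(plan(start, {"holding_bottle": True}))
# → ['close_cupboard', 'turn_light_off', 'open_cupboard', 'grab_bottle']
```

Note how the plan reproduces the article's lesson without it being hand-coded: the light switch is blocked while the cupboard is open, so the robot must close the cupboard, kill the light, and only then reopen it to grab the bottle.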

“If we want intelligent robots, we can’t write a program for everything we might want them to do,” George Konidaris, a Brown University assistant professor who led the study, told TechCrunch. “We have to be able to give them goals and have them generate behavior on their own.”

Of course, asking every robot to learn this way is equally inefficient, but the researchers believe they can develop a common language and create skills that could be downloaded to new hardware.

“I think what will happen in the future is there will be skills libraries, and you can download those,” explains Konidaris. “You can say, ‘I want the skill library for working in the kitchen,’ and that will come with the skill library for doing things in the kitchen.”

Read more: https://techcrunch.com/2018/02/10/teaching-robots-to-understand-their-world-through-basic-motor-skills/
