Forward brings its personalized healthcare service to Los Angeles

Forward, the San Francisco-based startup that’s looking to refashion healthcare services in Apple’s image, is expanding with its first location in Los Angeles.

Weaving together a number of Silicon Valley’s favorite healthcare trends, the company combines proprietary, purpose-built medical devices with algorithmically enabled diagnostic tools and the latest in gene, bacteria, and blood tests to provide a holistic view of its patients’ health.

These technologies and services include baseline screening; blood and genetic testing; wellness and nutrition counseling; ongoing monitoring from wearable sensors provided at the clinic; and 24/7 access to its AI and medical staff through the app. All of it is available to anyone willing to pay the $149 per month fee.

At its launch, Aoun told us about 15 percent of its early users came from underserved communities and received free memberships. Members also get their first month of prescription medicine free through Forward’s onsite pharmacy.

Forward also plans to sell vitamins, supplements and wearables through the onsite store, and Aoun said he would like to offer alternative treatments such as acupuncture in the future.

Opening in a small office on the first floor of the Westfield Century City mall, Forward’s Los Angeles office will contain all of the bells and whistles that brought it so much attention when it opened its first San Francisco location in January.

There are custom-built exam rooms kitted up with interactive, touch-screen displays — part of what the company touts as an integrated, paperless system for new electronic health records.

The centerpiece of the company’s facility is a purpose-built body scanner that collects basic readings like temperature, pulse, and measures of arterial health, which are then sent to the company’s staff doctors.

Those aren’t the only diagnostic tools. The company also has an app and is rolling out services around fertility and sleep tracking, as well as dermatological and optometry services in its two offices.

Once the scans are completed, doctors review the results of diagnostic tests with their patients in one of those exam rooms, where voice recognition software records the conversation and flags key words to help capture the important parts of the examination.

This expansion into Southern California marks the next step in the journey that former Google executive Adrian Aoun first embarked on 18 months ago when he started building out the company’s medical devices and first office in a warehouse in San Francisco’s SoMa neighborhood.

An early entrepreneur who first came to prominence through his work building natural language processing software that would enable users to create searches for specific topics, Aoun was one of the original architects of Google’s artificial intelligence strategies and the founder of the company’s urban technology subsidiary, Sidewalk Labs.

Aoun’s attention turned to healthcare after his brother had a heart attack, he says, which led him to confront the inadequacies of the existing system.

“The existing healthcare system was not built for you,” Aoun says. “Their incentives are not to actually make you healthy and they’re certainly not to make this cheaper.”

While Forward isn’t necessarily making healthcare cheaper either, it is planting a flag for making healthcare better, Aoun says. And he thinks that’s the first step to changing the whole system.

“It’s absurd to think that the disruption is going to come from the inside,” he says.

The problem, for Aoun, is that existing healthcare solutions can’t “scale” because treatment depends on highly skilled medical professionals (and there’s a shortage of those these days).

“We need to figure out how to scale doctors so that they touch more lives… The same way an engineer can scale through software,” he says.

Aoun sees Forward as building the tools that other companies can then use to drive down costs and bring the solutions that his company is developing to a larger market.

And, he argues, the Forward price tag isn’t all that expensive. “$149 per month is about half the price of a fancy gym,” he says. “We have to start somewhere.”

While Forward doesn’t talk about its financing, it has secured investments from some of Silicon Valley’s marquee investors and entrepreneurs.

Postmates launches in first international city

On-demand platform Postmates is launching in Mexico City, its first international market, today with more than 1,000 merchant partners and couriers on board. That means people who live in areas like Polanco, Condesa, Juárez, Cuauhtémoc, Roma, Jardines Del Pedregal, Lomas de Tecamachalco and Ciudad Universitaria will be able to access on-demand delivery via Postmates.

“We believe Mexico City is a perfect market for Postmates to offer its on-demand delivery service,” Postmates co-founder and CTO Sean Plaice said in a statement. “It is one of the largest urban consumer markets in the world — with a vibrant economy and food scene. In the U.S., Postmates is known for our reliable network and an intuitive app experience, which makes it incredibly easy to unlock the best food within your city. We realize our name — Postmates — is a bit difficult to pronounce in Spanish. But we guarantee that will be the hardest part of your Postmates experience!”

In the U.S., Postmates has 100,000 couriers completing over 2.5 million deliveries every month. By launching in Mexico City, Postmates will compete directly with the likes of UberEATS and SinDelantal (owned by JUST EAT and iFood) in food delivery. In grocery delivery, which Postmates recently invested in more heavily in the U.S., it will compete with startups like Cornershop and Mercadoni.

To attract customers, Postmates is giving 1,000 pesos’ worth of delivery credits for the month. After that, deliveries will cost 25 pesos per order.

Adeptmind raises $4.5M from Fidelity to bring smarter search to retailers

If you’ve ever searched for a product on any website that’s not Amazon or Google, you’ve probably had a bad time trying to find something — and then gone straight back to Google or Amazon.

That’s a significant problem for retailers, which need to ensure that potential customers who are already signaling a lot of interest in buying something can actually find those products and end up buying them. That’s why G Wu and Jing He started Adeptmind, a tool that gives retailers a way to implement a smarter search engine on their sites by collecting data related to all of their products and zeroing in on what customers are actually looking for. To do that, Adeptmind said it has raised $4.5 million in a financing round from Fidelity Canada.

“A lot of times NLP companies will have fairly ‘comprehensive’ knowledge graphs where you do internal labeling, but most of the data comes from the product catalog,” Wu, the CEO, said. “As such anything not in the product catalog will not be understood. There’s no free lunch when it comes to machine learning. We target crawl a large portion of the web. Based on the web we do targeted crawling so any relevant information we ingest.”

Here’s an example they gave: when searching for “ripped jeans” on a website like Diesel, you might end up with a lot of regular jeans rather than the right results, because the site’s search doesn’t recognize that the “ripped” modifier is meant to filter out everything else. Adeptmind crawls various corners of the internet, including forums, to determine which products potential customers cross-reference with the phrase “ripped jeans,” and then uses that to narrow the list down to what customers actually want.
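To make the difference concrete, here is a minimal sketch (not Adeptmind’s actual system; the catalog and tag set are invented) contrasting a naive keyword match, which treats “ripped” like any other word, with a search that enforces each query word against attribute tags learned from crawling:

```python
# Toy product catalog. The "Distressed Jeans" entry illustrates the point
# of crawling: an external source taught us "distressed" means "ripped".
PRODUCTS = [
    {"name": "Slim Fit Jeans", "tags": {"jeans", "slim"}},
    {"name": "Ripped Skinny Jeans", "tags": {"jeans", "ripped", "skinny"}},
    {"name": "Distressed Jeans", "tags": {"jeans", "ripped"}},
]

def naive_search(query):
    """Match any product whose name shares at least one word with the query."""
    words = set(query.lower().split())
    return [p["name"] for p in PRODUCTS
            if words & set(p["name"].lower().split())]

def modifier_search(query):
    """Require every query word to be satisfied by the product's tag set."""
    words = set(query.lower().split())
    return [p["name"] for p in PRODUCTS if words <= p["tags"]]

print(naive_search("ripped jeans"))     # every pair of jeans matches
print(modifier_search("ripped jeans"))  # only the ripped styles survive
```

The naive version returns all three products because they all contain the word “jeans”; the tag-based version returns only the two ripped styles, including the one whose name never says “ripped” at all.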

Those queries, as a result, can theoretically get as complicated as the ones you might rattle off to a service like Hound or Siri just to test the limits of its capabilities. You might go to some kind of a jacket website and stretch the search out to an extremely narrow subset of products and demographics, and Adeptmind’s pitch is that it’ll still be able to turn up the proper results based on its efforts to build a language graph around products that’s more robust than just keyword search.

That’s the pitch when the company walks into an office and tries to sell into larger businesses, where you have to be able to pull out a laptop and show that the technology actually works. The goal, eventually, is to offer retailers a way to simply say “give me a search engine” and plug directly into Adeptmind right away as it begins chugging away at building a language graph around those products.

To be sure, it’s not entirely clear that major retailers would end up buying into this, especially after they’ve trained consumers to just pop over to Google or Amazon to find a product because of poor, janky search engines. It’s an uphill battle, and because the data is grabbed from around the web, other companies may look to build a similar kind of language graph around products that they could sell into retailers. The goal for Adeptmind, VP of product Yoav Artzi said, is to convince those retailers that the unsupervised nature of the product will give them the best results — and, also, to be first to get into those retailers.

“A lot of times NLP services tend to be consulting in nature,” Artzi said. “You build out a system with people spending three or four months, and then you have to do another store and spend another three or four months. Eventually, you’re bounded by linear growth. You don’t have to spend a lot of effort if your system is able to support them through unsupervised learning. We ingest the catalog and get to very high accuracy very quickly. That was harder to do pre-deep learning, so we’re catching the front end of deep learning and NLP.”

Amazon reportedly blames the U.S. Postal Service for Amazon Fresh issues

Amazon is blaming the U.S. Postal Service for having to shut down Fresh in some areas, Recode reports. Internally, sources told Recode that Amazon is saying USPS was responsible for making the deliveries in most of the affected areas.

As the story goes, Amazon is throwing shade at the USPS, telling food brands the USPS wasn’t reliable in delivering the food on time or at all, Recode reports. Amazon also reportedly told brands that the economics of the business were harder in those areas because they were not very densely populated.

Earlier this month, Amazon Fresh halted its services in parts of nine states nationwide. Amazon, however, declined to say just how many neighborhoods were affected.

The shutdown of Fresh in some parts of the country came a few months after Amazon bought Whole Foods for $13.7 billion, though Amazon said what’s happening with Fresh is unrelated to the acquisition.

I’ve reached out to Amazon and will update this story if I hear back.

Why work collaboration startups keep drawing massive valuations

In Silicon Valley, trends come and go, and with them go the investors and entrepreneurs. That common wisdom is borne out by most investment categories: social networks, e-commerce and cleantech, to name a few. But not work collaboration startups.

Take this past May, when the Slack competitor Symphony raised money at a $1 billion valuation, and enterprise work management platform Smartsheet was funded at an $850 million valuation, within a week of each other. This was six years after Dropbox got its $4 billion valuation, with many other collaboration startups ballooning up in between.

Collaboration remains a successful area for investment because it presents significant opportunities for businesses who must relentlessly seek new efficiencies to remain competitive. Though this is an increasingly crowded field, the best products are those that simplify, streamline, and speed up existing work processes – freeing up bandwidth for higher-value work.

The never-ending quest for efficiency

“Perhaps there are forty, fifty, or a hundred ways of doing each act in each trade,” wrote Frederick Winslow Taylor more than a century ago, “but there is always one method and one implement which is quicker and better than any of the rest.” The same is true today, although these days there seem to be a thousand ways to complete any given task. Email proved to be better than faxes or memos at sharing information quickly and efficiently; later, more software appeared for related uses, such as Skype and Basecamp. In the third generation of such tools, Slack replaced some features of email and Skype by combining elements of both: chat with archiving, notification, and search capabilities.

The next frontier is work management: a broadening of the category into new functionality that enables not just real-time communication, but real-time collaboration and visibility. The goal is to eliminate silos between work groups – both internal and external – and provide clear accountability for tasks throughout a process or project. The best-in-class in this industry are products that are intuitive to use, and enable business users to benefit from them without the need to write code or seek help from IT to implement.

Automation comes of age

Now, some companies are leapfrogging even further ahead to layer on powerful automation that shaves hours off the average worker’s work week by eliminating repetitive, manual tasks. Smartsheet, for instance, has recently released automation features that workers can configure themselves, with things like reminders and requests for updates and approvals triggered automatically. The potential upside of this functionality can be significant; in a recent survey commissioned by Smartsheet, more than 40 percent of respondents estimated they spend at least a quarter of their work week on repetitive tasks. By reducing the load of emails and “status update” meetings, businesses may not only recoup valuable worker hours, they may see an improvement in worker satisfaction, too.
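The rules-based automation described above amounts to a small condition-to-action engine. The sketch below is a generic illustration of that pattern, not Smartsheet’s actual feature set; the rule fields, tasks and action names are all invented:

```python
from datetime import date

def run_rules(rows, rules, today):
    """Evaluate each rule against each row; collect the actions that fire."""
    actions = []
    for row in rows:
        for rule in rules:
            if rule["condition"](row, today):
                actions.append((rule["action"], row["task"]))
    return actions

# User-configured rules: a reminder for overdue work, and an automatic
# approval request when a row reaches the right status.
rules = [
    {"condition": lambda r, t: r["due"] <= t and r["status"] != "Done",
     "action": "send_reminder"},
    {"condition": lambda r, t: r["status"] == "Awaiting Approval",
     "action": "request_approval"},
]

rows = [
    {"task": "Q4 budget", "due": date(2017, 11, 20), "status": "In Progress"},
    {"task": "Ad copy", "due": date(2017, 11, 30), "status": "Awaiting Approval"},
]

print(run_rules(rows, rules, today=date(2017, 11, 21)))
# [('send_reminder', 'Q4 budget'), ('request_approval', 'Ad copy')]
```

The point of tools in this category is that business users configure the `rules` list themselves, through a UI rather than code, and the engine handles the repetitive follow-up for them.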

Collaboration companies keep getting unicorn-scale valuations because their innovations answer the growing need for work efficiency. In an economy where companies run on single-digit margins, the quest for “quicker and better” is nearly infinite.

There’s a whole universe of ways that startups can help enterprises improve work processes and speed execution. The recent funding of Smartsheet, Slack, and Symphony is a harbinger of what’s next. For as long as knowledge work continues evolving, work management will remain the tech trend that never dies.

Facebook open sources Open/R distributed networking software

Facebook is no stranger to open sourcing its computing knowledge. Over the years, it has consistently created software and hardware internally, then released that work to the open source community. Today, continuing that tradition, it announced it was open sourcing its modular network routing software, Open/R.

Facebook obviously has unique scale needs when it comes to running a network. It has billions of users doing real-time messaging and streaming content at a constant clip. As with so many things, Facebook found that running the network traffic using traditional protocols had its limits, and it needed a new way to route traffic that didn’t rely on the protocols of the past.

“Open/R is a distributed networking application platform. It runs on different parts of the network. Instead of relying on protocols for networking routing, it gives us flexibility to program and control a large variety of modern networks,” Omar Baldonado, engineering director at Facebook, explained.

While it was originally developed for Facebook’s Terragraph wireless backhaul network, the company soon recognized it could work on other networks too, including the Facebook network backbone and even in the middle of the Facebook network, he said.

Given the company’s extreme traffic requirements, with conditions changing rapidly and at such scale, it needed a new way to route traffic on the network. “We wanted to find per application, the best path, taking into account dynamic traffic conditions throughout the network,” Baldonado said.
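The idea of recomputing the best path as conditions change can be illustrated with plain Dijkstra over a mutable link-cost map. This is a toy sketch with a made-up topology, not Open/R’s actual implementation or API:

```python
import heapq

def best_path(graph, src, dst):
    """Dijkstra's shortest path over per-link costs; returns (cost, path)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return None

# An invented four-node topology with link costs.
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
print(best_path(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])

# Congestion raises the cost of the B->C link; recomputing shifts the route.
graph["B"]["C"] = 10
print(best_path(graph, "A", "D"))  # (5, ['A', 'C', 'D'])
```

A system like the one Baldonado describes would feed live traffic measurements into the cost map and rerun this kind of computation continuously, potentially with different cost functions per application.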

But Facebook also recognized that it could only take this so far internally; by working with partners, other network operators and hardware manufacturers, it could extend the capabilities of the tool. The company is in fact working with others in this endeavor, including Juniper and Arista Networks, but open sourcing the software lets developers do things with it that Facebook might not have considered, a prospect its engineering team finds both exciting and valuable.

It’s also part of a growing trend at Facebook (and other web scale companies) to open up more and more of the networking software and hardware. These companies need to control every aspect of the process that they can, and building software like this, then giving it to the open source community lets others bring their expertise and perspective and improve the original project.

“This goes along with movement toward disaggregation of the network. If you open up the hardware and open up the software on top of it, it benefits everyone,” Baldonado said.

Study: Russian Twitter bots sent 45k Brexit tweets close to vote

To what extent — and how successfully — did Russian-backed agents use social media to influence the UK’s Brexit vote? Yesterday Facebook admitted it had linked some Russian accounts to Brexit-related ad buys and/or the spread of political misinformation on its platform, though it hasn’t yet disclosed how many accounts were involved or how many rubles were spent.

Today The Times reported on research conducted by a group of data scientists in the US and UK looking at how information was diffused on Twitter around the June 2016 EU referendum vote and around the 2016 US presidential election.

The Times reports that the study tracked 156,252 Russian accounts which mentioned #Brexit, and also found Russian accounts posted almost 45,000 messages pertaining to the EU referendum in the 48 hours around the vote.

Tho Pham, one of the report’s authors, confirmed to us in an email that the majority of those Brexit tweets were posted on June 24, 2016, the day after the vote, when around 39,000 Brexit tweets were posted by Russian accounts, according to the analysis.

But in the run-up to the referendum vote, they also generally found that human Twitter users were more likely to spread pro-leave Russian bot content via retweets (vs pro-remain content) — amplifying its potential impact.

From the research paper:

During the Referendum day, there is a sign that bots attempted to spread more leave messages with positive sentiment as the number of leave tweets with positive sentiment increased dramatically on that day.

More specifically, for every 100 bots’ tweets that were retweeted, about 80-90 tweets were made by humans. Furthermore, before the Referendum day, among those humans’ retweets from bots, tweets by the Leave side accounted for about 50% of retweets while only nearly 20% of retweets had pro-remain content. In the other words, there is a sign that during pre-event period, humans tended to spread the leave messages that were originally generated by bots. Similar trend is observed for the US Election sample. Before the Election Day, about 80% of retweets were in favour of Trump while only 20% of retweets were supporting Clinton.

You do have to wonder whether Brexit wasn’t something of a dry run disinformation campaign for Russian bots ahead of the US election a few months later.

The research paper, entitled Social media, sentiment and public opinions: Evidence from #Brexit and #USElection, which is authored by three data scientists from Swansea University and the University of California, Berkeley, used Twitter’s API to obtain relevant datasets of tweets to analyze.

After screening, their dataset for the EU referendum contained about 28.6M tweets, while the sample for the US presidential election contained ~181.6M tweets.

The researchers say they identified a Twitter account as Russian-related if it had Russian as the profile language but the Brexit tweets were in English.

They detected bot accounts (defined by them as Twitter users displaying ‘botlike’ behavior) using a method that scores each account on a range of factors, such as whether it tweeted at unusual hours, the volume of its tweets relative to its account age, and whether it repeatedly posted the same content.
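A rough sketch of that kind of scoring might look like the following. The factors mirror the ones the researchers list, but the feature names, weights and thresholds here are entirely invented for illustration:

```python
def bot_score(account):
    """Count how many bot-like signals an account trips (0-3)."""
    score = 0
    # Factor 1: a large share of tweets posted at unusual hours.
    if account["night_tweet_ratio"] > 0.5:
        score += 1
    # Factor 2: tweet volume out of proportion to account age.
    if account["tweets"] / max(account["age_days"], 1) > 100:
        score += 1
    # Factor 3: posting the same content over and over.
    if account["duplicate_ratio"] > 0.8:
        score += 1
    return score

suspect = {"night_tweet_ratio": 0.7, "tweets": 9000, "age_days": 30,
           "duplicate_ratio": 0.9}
human = {"night_tweet_ratio": 0.1, "tweets": 1200, "age_days": 900,
         "duplicate_ratio": 0.05}

def is_bot(account):
    # Flag accounts that trip most of the signals.
    return bot_score(account) >= 2

print(is_bot(suspect), is_bot(human))  # True False
```

Real bot-detection systems combine many more signals and typically learn the weights from labeled data, but the shape of the approach, scoring accounts against behavioral features, is the same.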

Around the US election, the researchers generally found a more sustained use of politically motivated bots vs around the EU referendum vote (when bot tweets peaked very close to the vote itself).

They write:

First, there is a clear difference in the volume of Russian-related tweets between Brexit sample and US Election sample. For the Referendum, the massive number of Russian-related tweets were only created few days before the voting day, reached its peak during the voting and result days then dropped immediately afterwards. In contrast, Russian-related tweets existed both before and after the Election Day. Second, during the running up to the Election, the number of bots’ Russian-related tweets dominated the ones created by humans while the difference is not significant during other times. Third, after the Election, bots’ Russian-related tweets dropped sharply before the new wave of tweets was created. These observations suggest that bots might be used for specific purposes during high-impact events.

In each data set, they found bots typically more often tweeting pro-Trump and pro-leave views vs pro-Clinton and pro-remain views, respectively.

They also say they found similarities in how quickly information was disseminated around each of the two events, and in how human Twitter users interacted with bots — with human users tending to retweet bots that expressed sentiments they also supported. The researchers say this supports the view of Twitter creating networked echo chambers of opinion as users fix on and amplify only opinions that align with their own, avoiding engaging with different views.

Combine that echo chamber effect with deliberate deployment of politically motivated bot accounts and the platform can be used to enhance social divisions, they suggest.

From the paper:

These results lend supports to the echo chambers view that Twitter creates networks for individuals sharing the similar political beliefs. As the results, they tend to interact with others from the same communities and thus their beliefs are reinforced. By contrast, information from outsiders is more likely to be ignored. This, coupled by the aggressive use of Twitter bots during the high-impact events, leads to the likelihood that bots are used to provide humans with the information that closely matches their political views. Consequently, ideological polarization in social media like Twitter is enhanced. More interestingly, we observe that the influence of pro-leave bots is stronger the influence of pro-remain bots. Similarly, pro-Trump bots are more influential than pro-Clinton bots. Thus, to some degree, the use of social bots might drive the outcomes of Brexit and the US Election.

In summary, social media could indeed affect public opinions in new ways. Specifically, social bots could spread and amplify misinformation thus influence what humans think about a given issue. Moreover, social media users are more likely to believe (or even embrace) fake news or unreliable information which is in line their opinions. At the same time, these users distance from reliable information sources reporting news that contradicts their beliefs. As a result, information polarization is increased, which makes reaching consensus on important public issues more difficult.

Discussing the key implications of the research, they describe social media as “a communication platform between government and the citizenry”, and say it could act as a layer for government to gather public views to feed into policymaking.

However they also warn of the risks of “lies and manipulations” being dumped onto these platforms in a deliberate attempt to misinform the public and skew opinions and democratic outcomes — suggesting regulation to prevent abuse of bots may be necessary.

They conclude:

Recent political events (the Brexit Referendum and the US Presidential Election) have observed the use of social bots in spreading fake news and misinformation. This, coupled by the echo chambers nature of social media, might lead to the case that bots could shape public opinions in negative ways. If so, policy-makers should consider mechanisms to prevent abuse of bots in the future.

Commenting on the research in a statement, a Twitter spokesperson told us: “Twitter recognizes that the integrity of the election process itself is integral to the health of a democracy. As such, we will continue to support formal investigations by government authorities into election interference where required.”

Its general critique of external bot analysis conducted via data pulled from its API is that researchers are not privy to the full picture as the data stream does not provide visibility of its enforcement actions, nor on the settings for individual users which might be surfacing or suppressing certain content.

The company also notes that it has been adapting its automated systems to pick up suspicious patterns of behavior, and claims these systems now catch more than 3.2M suspicious accounts globally per week.

Since June 2017, it also claims it’s been able to detect an average of 130,000 accounts per day that are attempting to manipulate Trends — and says it’s taken steps to prevent that impact. (Though it’s not clear exactly what that enforcement action is.)

Since June it also says it’s suspended more than 117,000 malicious applications for abusing its API — and says the apps were collectively responsible for more than 1.5BN “low-quality tweets” this year.

It also says it has built systems to identify suspicious attempts to log in to Twitter, including signs that a login may be automated or scripted — techniques it claims now help it catch about 450,000 suspicious logins per day.

The Twitter spokesperson noted a raft of other changes the company says it’s been making to try to tackle negative forms of automation, including spam. Though he also flagged the point that not all bots are bad. Some can be distributing public safety information, for example.

Even so, there’s no doubt Twitter and social media giants in general remain in the political hotspot, with Twitter, Facebook and Google facing a barrage of awkward questions from US lawmakers as part of a congressional investigation probing manipulation of the 2016 US presidential election.

A UK parliamentary committee is also currently investigating the issue of fake news, and the MP leading that probe recently wrote to Facebook and Twitter to ask them to provide data about activity on their platforms around the Brexit vote.

And while it’s great that tech platforms finally appear to be waking up to the disinformation problem their technology has been enabling, in the case of these two major political events — Brexit and the 2016 US election — any action they have since taken to try to mitigate bot-fueled disinformation obviously comes too late.

Citizens in the US and the UK, meanwhile, are left to live with the results of votes that appear to have been directly influenced by Russian agents using US tech tools.

Today, Ciaran Martin, the CEO of the UK’s National Cyber Security Centre (NCSC) — a branch of domestic security agency GCHQ — made public comments stating that Russian cyber operatives have attacked the UK’s media, telecommunications and energy sectors over the past year.

This follows public remarks yesterday by the UK prime minister, Theresa May, who directly accused Russia’s Vladimir Putin of seeking to “weaponize information” and plant fake stories.

The NCSC is “actively engaging with international partners, industry and civil society” to tackle the threat from Russia, added Martin (via Reuters).

Asked for a view on whether governments should now be considering regulating bots if they are actively being used to drive social division, Paul Bernal, a lecturer in information technology at the University of East Anglia, suggested top down regulation may be inevitable.

“I’ve been thinking about that exact question. In the end, I think we may need to,” he told TechCrunch. “Twitter needs to find a way to label bots as bots — but that means they have to identify them first, and that’s not as easy as it seems.

“I’m wondering if you could have an ID on twitter that’s a bot some of the time and human some of the time. The troll farms get different people to operate an ID at different times — would those be covered? In the end, if Twitter doesn’t find a solution themselves, I suspect regulation will happen anyway.”

Google adds a slew of new Assistant features for app developers

In order for Google Assistant to be a real contender against the likes of Alexa, it needs third-party app support. To get more developers on board, the company needs to add a lot more features to incentivize development, even as Amazon’s home AI continues to dominate market share. This morning Google took a key step toward making Assistant a more compelling experience, announcing a boatload of new features for app developers, including push notifications, daily updates and additional language support.

One of the more interesting new features on board is speaker-to-phone transfer, a new API that makes it possible to start an action on a Google Home speaker and complete it on the phone. So users can, say, order food through the speaker and get the receipt on a screen. That addition is in keeping with Google’s push to grow Assistant beyond just a simple voice interface.

It could also, perhaps, lay the foundation for the Echo Show competitor we’ve all been waiting for from the company.

Push notifications are a biggie too, for obvious reasons. That new API means apps can send important updates to users on the phone, with spoken Google Home functionality coming down the road. Also new here is a For Families badge, designating which apps are okay for the little ones, and support for additional languages, including Spanish, Italian and Brazilian Portuguese.

The ability to link accounts in the app has been improved as well — in the earlier build, users could only do it before engaging with the app. Now it can be accomplished whenever it’s most convenient. Oh, and the updated version of the Cancel command lets the app send a user a polite farewell before logging off, because courtesy is important, even for smart assistants.

The new features come roughly a month after Google added a number of new additions to its Home family of smart speakers, along with the Pixel Buds, which have recently started shipping. The new additions will, hopefully, give developers enough time to ramp up their app experiences ahead of the holidays.

Black Friday will be the biggest mobile shopping day ever in the U.S., forecast claims

A new report from App Annie predicts that time spent shopping in mobile apps will grow 45 percent in the U.S. during the week of Black Friday, compared with the same period two years ago. The firm also expects revenue generated through apps to break new records this season, and says consumers will spend over 6 million hours shopping in the top five digital-first apps on Black Friday alone.

App Annie’s forecast is based on data from Android devices in the U.S., as it doesn’t have visibility into iOS in the same way.

The news follows an earlier forecast claiming mobile shopping visits will top the desktop for the first time this holiday season.

According to App Annie, the 6 million-plus hours spent on Black Friday in the top five digital-first apps (e.g. apps from companies like Amazon, Wish, Etsy and Zulily that only exist online) represents a 40 percent increase over just last year.

That also means that on Black Friday – November 24, 2017 – these top five apps will account for 15 percent of the total time spent in shopping apps during the entire Black Friday week (Nov. 19-25).

Meanwhile, other top shopping apps that App Annie dubs the “bricks-and-clicks” apps – meaning those where the retailer has both an online and brick-and-mortar presence – will also see some growth, though not as strong. Top bricks-and-clicks apps include those from retailers like Target (which has two apps), Walmart, Walgreens and Kohl’s, for example.

The firm predicts the top five apps in this group will see 30 percent growth in time spent on Black Friday 2017, compared to Black Friday 2016.

Combined with the expected increases in mobile shopping revenues generated in the apps, App Annie believes Black Friday 2017 will be the biggest mobile shopping day ever in the U.S.

Black Friday may also lead to a ripple effect in mobile e-commerce around the world, the report points out.

As with the traffic increases seen on Amazon’s Prime Day, the total time spent in shopping apps outside the U.S. will also increase this year. In Japan, time spent in shopping apps on Android will be up 65 percent from two years ago, to over 15 million hours; the U.K. will see a 45 percent increase, to over 6 million hours.

This year, AliExpress may also see significant usage during Black Friday week. The app already snagged the number one spot for shopping apps across iOS and Google Play ahead of Singles’ Day (Nov. 11) in the U.K., France, and Germany.

Separately, the firm Sensor Tower noted AliExpress has just achieved a milestone here in the U.S. as well – it hit the top of the U.S. iPhone chart for the first time on November 12, 2017. (Its previous peak had been #51 back on March 23.)

App Annie had previously reported the growth in mobile shopping in general here in the U.S., noting that consumers were now spending 10 hours a year in these apps.