Friday, January 27, 2017

AI in Smartphones: Separating Fact From Fiction, and Looking Ahead

As happens every year, one hot feature sets a trend in technology, and suddenly every company boasts its own unique variation of it. This year, that feature is AI. Hot on the heels of Alexa's and Google Assistant's holiday successes, Artificial Intelligence on phones has become the de facto must-have feature – whether consumers know it or not. Manufacturers seem not to realize, however, that AI doesn't mean "Anything Intuitive" – that's just how operating systems are supposed to behave. Yet OEMs seem eager to label nearly any vaguely intuitive feature as AI. As this trend will no doubt continue, it's important to take a moment to separate fact from fiction.


What is AI, really?

Before we dive in, let's outline some definitions and distinctions regarding the field of AI. AI breaks down into two main categories: General AI and Narrow AI. General AI, in theory, is meant to replicate human consciousness and resemble a sentient being – think I, Robot or Terminator – while Narrow AI is used to achieve a specific task or reasoning skill. General AI is still some time off from realization, so Narrow AI will be the focus here. The graphic below illustrates the field's evolution and the tiers within it.

Image Credit: Nvidia Blogs

In the earliest days of AI, Arthur Samuel – the man who coined the term "Machine Learning" – programmed a computer to play checkers. Samuel used algorithms based on piece positioning, the number of pieces, and the proximity of pieces to "king" spots, among other things. This was the basis of early AI, which would soon cross over into Machine Learning. As the program continued to develop, it gained the ability to "learn" from previous situations – going on to play thousands of games against itself to improve its own skill – the same basic mechanism by which Machine Learning works today. Deep Learning, the most recent evolution of AI, goes a level further by leveraging Neural Networks, which enable computers to process data – including pictures, text, and numbers – and then draw conclusions. Neural Networks use layers of Machine Learning components (often referred to as Neurons) to process and "learn" information in much the same way as the human brain, where repetition and variety are key. With the right algorithms, hardware, and the wealth of Big Data that now exists, Neural Networks have become very capable and efficient at absorbing large data sets – completing tasks and indeed learning from each of them.

Image Credit: Google Research Blogs
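To make the idea of a "Neuron" concrete, here is a minimal sketch of a single artificial neuron in Python. The feature values, weights, and bias are hand-picked for illustration, not taken from any real network:

```python
import numpy as np

def neuron(inputs, weights, bias):
    # A single artificial "neuron": a weighted sum of its inputs,
    # squashed by a sigmoid into a (0, 1) "confidence" value.
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Purely illustrative inputs and hand-picked weights.
features = np.array([0.8, 0.2, 0.5])
weights = np.array([1.5, -2.0, 0.7])
print(neuron(features, weights, bias=0.1))  # ~0.78
```

Stack enough of these in layers – with the weights adjusted automatically through training rather than set by hand – and you have the Neural Networks described above.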


For example, instead of playing Checkers, a computer may be tasked with recognizing a picture of a Checkers game. Having been "trained" by processing thousands of pictures of Checkers games, the layers within the Neural Network assign values to the probability, or "confidence," that the present picture has the particular attributes of a Checkers game. Each layer may be in charge of recognizing a certain attribute, such as the board's square shape, its checkered pattern, the colors of the pattern, the position or shape of the pieces, and much more. If these attributes are detected with high probability, then the network may determine, with "X" degree of "certainty," that the picture is in fact of a Checkers game. Although Machine Learning alone has reached similar capabilities, Neural Networks have lessened the need for lengthy, explicit coding while also improving accuracy, efficiency, and overall capability.
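As a rough sketch of that final step – with made-up attribute names and confidence values, since a real network learns its own weighting – the layers' outputs might be combined like so:

```python
# Hypothetical per-attribute confidences reported by earlier layers.
attribute_confidence = {
    "square_board": 0.97,
    "checkered_pattern": 0.94,
    "round_pieces": 0.88,
    "red_black_colors": 0.76,
}

# A trained network learns how much each attribute matters; for
# illustration we simply average the confidences into one verdict.
certainty = sum(attribute_confidence.values()) / len(attribute_confidence)
print("Checkers game, with {:.0%} certainty".format(certainty))  # ~89%
```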

Though voice assistants such as Siri, Cortana, and Google Assistant are commonly known to utilize Neural Networks to improve their speech recognition, they have otherwise shown themselves to be quite limited. Generally, these assistants do little more than act as input/output for applications and web searches – learning little, if anything, about the user in the process. Even these well-funded and continually developed assistants have considerable room to improve their automation, intelligence, and learning capacity. Given that, what level of useful integration can we expect from smaller OEMs who have only just entered the AI arena?

Huawei Honor Magic

The Honor Magic breaks new ground by incorporating artificial intelligence designed to understand and analyze users’ data in order to provide intelligent interactive features.

Go on.

To further improve user experience, the Honor Magic houses the Honor Magic Live system, which anticipates users' needs and facilitates their daily lives by offering a whole host of predictive information. The Honor Magic Live system is, for instance, able to formulate a range of customized recommendations based on users' social conversations via instant messaging apps – conversations revolving around movies will trigger blockbuster recommendations.

Honor Magic's Smart Display proactively retrieves and displays practical information, anticipating users' needs. For example, ordering a cab with the Honor Magic will trigger the driver's license plate number to be displayed on the screen.

Though these features can be useful, they are plainly not AI. While one could label them "intuitive" or even "smart," bringing up boarding passes at the airport or pulling shipping information from emails has existed in phones for years now and requires no degree of "learning" or adapting. Huawei could conceivably implement a system to gather and "learn" from user data to improve efficacy over time, but this might be overkill for such a feature. Furthermore, this likely would not track well in countries other than China, as Google Now already offers these features in just about every other country. It seems Huawei's engineers are flirting with the voice-assistant industry, but rather than creating a full-blown voice assistant to unify these features, they have chosen to skip that step entirely and move directly to touting the phone as AI-driven.
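To illustrate just how little "intelligence" such triggers require, here is a sketch – with invented trigger words, not Huawei's actual rules – of keyword-matched suggestions in Python:

```python
# Fixed keyword-to-suggestion rules: pure automation, no learning.
# Nothing here adapts to the user; the rules are set at write time.
TRIGGERS = {
    "movie": "blockbuster recommendations",
    "flight": "boarding pass",
    "package": "shipping status",
}

def suggest(message):
    # Return the canned suggestion for any trigger word in the message.
    return [action for word, action in TRIGGERS.items()
            if word in message.lower()]

print(suggest("Want to see a movie tonight?"))  # ['blockbuster recommendations']
```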

Huawei Mate 9

The Huawei Mate 9 automatically learns user habits to build app priority. The system anticipates the user's next moves to prepare resources in advance. This process is run on the phone, not the cloud, for better performance and privacy protection.

This is a tricky one – or so it attempts to be. Learning from user habits to anticipate the next app to be opened, and preemptively allocating resources for it, does technically fall under the umbrella of Machine Learning, albeit at a very basic level. However, boasting that "This process is run on the phone, not the cloud, for better performance and privacy protection" is quite misleading. In some applications of Machine Learning, extremely large data sets are stored in the cloud so that machines with much greater capabilities can process the data quickly and relay the results to your device. In the case of predicting the next app the user will open, the corresponding data set is extremely small and would never involve the cloud in any practical implementation of this feature. This bit seems to be pandering to consumer security concerns more than anything else.
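To show just how small that data set is, here is a minimal sketch of next-app prediction using simple frequency counts – a toy under our own assumptions, not Huawei's actual implementation:

```python
from collections import Counter, defaultdict

# Tally which app tends to follow which, from a local usage log.
usage_log = ["camera", "gallery", "chat", "camera", "gallery", "browser"]
transitions = defaultdict(Counter)
for current_app, next_app in zip(usage_log, usage_log[1:]):
    transitions[current_app][next_app] += 1

def predict_next(current_app):
    # Most frequent follower of the current app, if we've seen one.
    followers = transitions[current_app]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("camera"))  # 'gallery' – data this small never needs the cloud
```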

The Huawei Mate 9 automatically manages resources by prioritizing CPU, RAM and ROM for optimal performance and closes memory-intensive background apps. Within the CPU, fine-grained scheduling maximizes computing efficiency. For RAM, it automatically recycles memory resources, creating more memory for priority apps and enabling stutter-free performance. And for ROM, it opens an exclusive channel of Storage I/O, making the apps you use the most work even faster.

Huawei's track record with memory management is not a great one. Previously, they utilized a very basic system that informed the user of the most power-hungry background apps, then closed them. This feature seems to have become less obtrusive, though it remains minimally effective all the same. Beyond this, attempting to achieve "stutter-free performance" through such means is generally unnecessary. As we've seen, more substantive gains in performance can be made traditionally, through proper hardware/software pairings as well as optimizations to framework and design.

The new F2FS file system improves the I/O storage performance. This speeds up the database engine allowing pictures to load more smoothly. The optimized performance of the rendering engine gives better control and a faster reaction to your touch.

This is the true catalyst for increased performance. Much like the optimizations in Android 7.1 – most notably seen in the Google Pixel's buttery-smooth touch latency and responsiveness – the rendering tweaks in the Mate 9, the pairing of F2FS with UFS 2.1, and the highly capable Kirin 960 SoC are the true engines behind excellent system performance – not AI.

HTC U Ultra/Play

Details on HTC’s AI endeavors are still scarce, especially on their own website. As such, the following information has been gathered by Gadgets360, based on their time with HTC representatives and the U handset at CES this year.

With the new HTC U Ultra and HTC U Play, the company is betting big on its new AI assistant called Sense Companion, which it claims will learn your usage behaviour over time in order to present you with priority notifications and alerts based on the people you contact the most.

According to HTC, you’ll need to manually perform an initial setup of Sense Companion on the HTC U Ultra and HTC U Play, which involves adding your favourite contacts and apps in order to ‘train’ the AI, after which it’s supposed to automatically manage this for you…the AI will be able to alert you if your phone needs a charge, depending on your schedule for the day.

HTC AI will be able to understand your consumption patterns as well. For instance, rather than simply recommending restaurants around you, it will learn how you order food – based on restaurant ratings and proximity – and over time, when it’s gathered enough data, it will offer prompts to the places you are most likely to order from. The same goes for the weather. Instead of alerting you with weather alerts every day or hour, Sense Companion will only alert you when the weather is unusual.

As this information is based on a third party's understanding, details may be incomplete or otherwise misinterpreted. We certainly hope that is the case. The only mention of actual learning within this entire write-up is in reference to HTC Sense Companion's ability to recognize your "consumption patterns." This presumably means your choice of restaurants, stores, or other places where goods can be purchased. The scope of this may be smaller though, as it seems that third-party application support is a necessity for effective implementation. This aside, the only other instance where learning could be utilized – prioritizing notifications – has effectively removed AI from the equation by requiring the user to manually input their favorites. There's absolutely nothing wrong with this approach – save for labeling it AI. How HTC justifies labeling alerts for unusual weather as artificial intelligence is beyond comprehension, though we do hope future revelations will add clarity and justify this selling point.
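For a sense of how modest the described restaurant behavior could be, here is a minimal scoring sketch in Python – the weights, names, and data are all invented, and a system that truly "learns" would adjust the weights from order history rather than fixing them by hand:

```python
# Illustrative scoring rule: rank restaurants by rating and proximity.
# Hand-picked weights; a learning system would fit these to the user.
RATING_WEIGHT, DISTANCE_WEIGHT = 1.0, 0.5

restaurants = [
    {"name": "Thai Place", "rating": 4.6, "km_away": 1.2},
    {"name": "Burger Spot", "rating": 4.1, "km_away": 0.3},
    {"name": "Sushi Bar", "rating": 4.8, "km_away": 5.0},
]

def score(r):
    # Higher ratings help a restaurant; distance counts against it.
    return RATING_WEIGHT * r["rating"] - DISTANCE_WEIGHT * r["km_away"]

for r in sorted(restaurants, key=score, reverse=True):
    print(r["name"], round(score(r), 2))
```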

LG/Samsung

LG has recently indicated that they would like to leverage AI in their next phone, but with rumors of Google Assistant integration on their upcoming phones and Alexa support already on their other electronics, it is unclear to what extent we will see AI in LG devices. Samsung, on the other hand, seems to be readying its own "AI" assistant – Bixby. Built on technology developed by Viv, an AI company founded by the creators of Siri and recently acquired by Samsung, Bixby has some serious potential. Viv has shown itself capable of answering queries as complex as "Will it be warmer than 70 degrees near the Golden Gate Bridge after 5pm the day after tomorrow?" and much more. This proficiency with sophisticated queries, coupled with the creators' commitment to third-party application integration, certainly creates the potential to launch Bixby into the upper echelon of smartphone AI. Nevertheless, Bixby has yet to be officially announced, though early reports indicate the ability to interact with native apps, conduct mobile payments, and of course search the web. The smartest feature offered by Bixby, so far, appears to be a Google Goggles-like function that allows the camera to be used as an input to search the web. More details will surely emerge, but until the Galaxy S8 launches, speculation will continue to be just that – speculation.

Honorable Mention: Facebook

Just because Facebook doesn't have mobile hardware doesn't mean Mark Zuckerberg and company are out of the game. Facebook has built Neural Networks of its own, not just for facial recognition in photos, but also in a platform called Caffe2Go. This platform can capture, analyze, and process pixels in real time on a mobile device, effectively transferring stylistic features from a given painting to every single frame of a live video. With Oculus under its wing, the innovation is unlikely to stop there. Improvements in VR experiences and the creation of a computer with "common sense" are just a couple of points mentioned in Facebook's recent manifesto. If the world's fifth richest man has something to say about AI, you will certainly hear it. Expect some significant impacts on AI from the Facebook camp in the coming years as well.

Facebook’s Caffe2Go AI Algorithm. (Credits: CIO Today)

Tasker

Given the high prevalence of automation in these so-called "AI" features, it would be remiss not to mention Tasker. Tasker is essentially IFTTT for the local applications and functions on your phone, but with considerably more customizability – and thus potential – especially given its extensive repertoire of plugins. Priced at $2.99 in the Play Store, Tasker does not require root access (although some actions do necessitate it) and enables you to automate a myriad of situations. From setting your phone to read texts aloud when you're in the car to creating a mobile hotspot monitor, Tasker offers a seemingly endless array of automation options. A compiled list of some of our favorite Tasker functions, replete with walk-throughs and instructions, can be found here. From what we've read above, Tasker could certainly be leveraged to intelligently offer similar results – in fact, such solutions could be even more personalized and therefore effective.


The Future of AI (On Phones and Beyond)

With all the advances in Deep Learning, thankfully, mobile hardware has stepped up to the task. For a few years now, SoCs have been evolving behind the scenes in conjunction with Deep Learning – increasing their capabilities while decreasing their size and power consumption. For the most part, these chips were dedicated to bringing Machine Learning to mobile devices in healthcare and other sciences. Only very recently has the refinement of these chips become apparent – and soon, readily available to consumers in the form of Qualcomm's Snapdragon 835 SoC. The average Galaxy S8 buyer has little interest in using Machine Learning on an SoC (or MLSoC) to detect arrhythmias or myocardial infarctions with 95% accuracy – they would rather take a picture of El Chupacabra only to find out Bixby is 99% sure it's a cat – but both are indeed possible thanks to MLSoCs (albeit on different systems, for now at least).

Qualcomm has applications everywhere, even outside of mobile. A particularly cool example they briefed us on involved implementing object and context recognition in baby monitors/cameras. Updates or alerts can then be sent to the parents regarding their baby’s status. This can be very useful as Deep Learning enables the recognition of various activities or situations. Powered by chips such as the Snapdragon 835, mobile devices that aspire to be truly adaptive and intelligent will now have the proper hardware to do so.

Speaking of hardware, one also needs the proper software to utilize these capabilities. Enter TensorFlow. From the minds of the Google Brain Team, TensorFlow is an open-source Machine Learning library, free to download, that makes building Neural Networks far more accessible. With it, anyone can put together a Neural Network and feed in their own data to "train" it. Pre-made data sets and tools also ship with the library, though users can just as easily create their own. Some level of knowledge in Python or C++ is needed, but the official website has plenty of resources even for beginners. Perhaps its best feature is a single API that works across desktops, mobile devices, and even servers.
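As a taste of the workflow, here is a minimal sketch against the graph-and-session API TensorFlow used at the time – a toy linear model rather than a full Neural Network, trained on made-up data:

```python
import numpy as np
import tensorflow as tf

# Toy data: learn y = 2x + 1 from four points.
xs = np.array([0.0, 1.0, 2.0, 3.0], dtype=np.float32)
ys = 2.0 * xs + 1.0

# Define the model as a graph: two trainable variables and a loss.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
loss = tf.reduce_mean(tf.square(w * x + b - y))
train = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

# "Training" is just running the optimizer step repeatedly.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train, feed_dict={x: xs, y: ys})
    print(sess.run([w, b]))  # approaches [2.0, 1.0]
```

The same define-then-train pattern scales from this toy all the way up to the image-recognizing networks described earlier.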

Image Credit: Qualcomm

SoCs like the Snapdragon 835 have all the proper parts to run an effective Neural Network, such as those built with TensorFlow. In fact, Qualcomm has been working with Google to ensure their newest chip uses its components to their fullest potential when doing so. By utilizing the CPU and the DSP together, instead of just the CPU or GPU, the 835 has shown great potential and solid performance in Machine Learning – all before ever touching the inside of a commercially available phone.

The Wave Has Just Begun

Much to the chagrin of AI purists and those who value truth in advertising, unsubstantiated claims of Artificial Intelligence in smartphones are likely to continue and even grow. Few of these devices can rightfully say they learn and adapt in any way; most tout features that amount to little more than discreetly coded automation. Try as these companies might to obfuscate the true power of this technology, a real AI uprising is upon us. Breakthroughs in Machine Learning, coupled with rapidly advancing mobile technologies, have brought us to the point where legitimate Neural Networks can begin to run directly on mobile devices, without the cloud. The implications of this are large and far-reaching, impacting everything from modern medicine to how you find the pictures you've taken, and everything in between. Manufacturers claiming to harness AI simply want to be aligned with this sweeping movement – and given the potential, who could blame them?


Who/what do you have your eye on in the AI wars? Let us know in the comments below!



