Smartphones are reshaping the field of machine learning (ML): compressed AI models can now run, and even train, directly on the device. Traditionally, ML required powerful servers and data centers, but companies like Apple, Google, and Samsung now run ML on smartphones to enhance image quality and power a range of other applications.
The main challenges of ML on mobile devices are power consumption and limited storage. Smartphones, while capable computers, have far less battery capacity and storage than traditional PCs. To work within these constraints, ML models are compressed for mobile use. Neural architecture search (NAS) automatically evaluates candidate network designs to find the most efficient architecture for a given accuracy target. Additionally, pruning removes redundant connections after training, reducing model size by up to 40%.
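To make the pruning idea concrete, here is a minimal sketch of unstructured magnitude pruning in NumPy: the smallest-magnitude 40% of a weight matrix is zeroed out, after which the zeros can be stored sparsely or skipped at inference time. The function name and the standalone-NumPy framing are illustrative; production toolchains do this inside the training framework.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.4):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    `sparsity=0.4` mirrors the ~40% size reduction mentioned above;
    this is an illustrative sketch, not a production implementation.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))      # stand-in for one trained layer
pw = magnitude_prune(w, sparsity=0.4)
print(f"zeroed fraction: {(pw == 0).mean():.2f}")  # → zeroed fraction: 0.40
```

In practice pruning is usually followed by a short fine-tuning pass so the remaining weights can compensate for the removed connections.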
These optimizations allow ML models to be included in software development kits (SDKs) without taking up too much space. For example, ML models improve tasks like barcode recognition, document edge detection, and advanced OCR. However, the next challenge lies in enabling models to evolve based on user input.
Individualized models are gaining interest, such as large language models (LLMs) customized for a company's internal texts and documentation. Smartphone-based personal AI assistants that learn from user habits could also become a key feature of flagship devices. To avoid constantly sending user data to the cloud, and to preserve privacy, on-device training is crucial.
Transfer learning and federated learning are two approaches to on-device training. Transfer learning takes a model pre-trained on a large generic dataset and continues updating its parameters on local user inputs and feedback, producing a highly specialized model. Federated learning has many devices train locally and share only summarized parameter updates with a central server, minimizing communication with the cloud and preserving data privacy.
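The federated pattern can be sketched in a few lines of NumPy: each simulated "device" runs a few gradient steps on its private data (here, a toy linear-regression task), and the server averages the resulting weights, weighted by local dataset size. This is the federated-averaging idea in miniature; all names, the task, and the client counts are illustrative assumptions, not any vendor's actual protocol.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's on-device training: gradient steps on private data only."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: aggregate updates, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # ground truth the devices jointly learn
global_w = np.zeros(2)

for _ in range(10):              # each round: broadcast, train locally, aggregate
    updates, sizes = [], []
    for _ in range(3):           # three simulated devices with private data
        n = int(rng.integers(20, 40))
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.01 * rng.normal(size=n)
        updates.append(local_update(global_w, X, y))
        sizes.append(n)
    global_w = federated_average(updates, sizes)

print(np.round(global_w, 2))     # converges toward true_w
```

Note that only the weight vectors cross the network; the raw `(X, y)` data never leaves each simulated device, which is exactly the privacy property described above.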
While there was initial excitement around large language models like ChatGPT, companies are now focusing on understanding and controlling the limitations of ML, such as bias and data privacy concerns. The aim is to create ML software that works offline, consumes less battery, and runs efficiently on smartphones with limited storage. Compact language models tailored to specific business domains are in high demand, since they can keep learning and adapting through federated and transfer learning.
Moving forward, the goal is to refine and optimize existing capabilities rather than simply build ever-larger models. Smartphones have the potential to bring the full power of ML to everyday use, letting users do more with less hardware. With devices capable of running compressed ML models and supporting on-device training, the possibilities for personalized AI assistants and domain-specific knowledge repositories are wide open.
As technology evolves, companies are increasingly seeking flexible solutions that they can control and customize according to their specific needs. ML on smartphones has opened up new avenues for innovation, and the focus is now on maximizing the potential within our pockets.
In the rapidly advancing field of ML, smartphones are proving to be powerful tools for running AI models and enabling on-device training. With continued optimizations and advancements, the capabilities of ML on smartphones will continue to grow, offering exciting possibilities for various industries and users alike.