    Microsoft’s new Phi-3 AI model can run on an iPhone  

    As Deutsche Telekom’s CEO Tim Höttges recently quipped, “Who the hell needs apps?”

    Microsoft has introduced Phi-3, a family of open AI models it claims are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and maths benchmarks.  

    The mini version, for example, is a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5, according to a research paper published by Microsoft.  

    More importantly, the “highly capable language model” was running locally on a mobile phone. “Thanks to its small size, phi3-mini can be quantised to 4-bits so that it only occupies around 1.8 GB of memory,” stated the paper. The researchers tested the quantised model by deploying phi-3-mini on an iPhone 14 with the A16 Bionic chip, running natively on-device and fully offline, and achieving more than 12 tokens per second.
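The quoted memory figure follows directly from the parameter count: at 4 bits per weight, 3.8 billion parameters pack into roughly 1.9 GB (about 1.77 GiB), which lines up with the paper's ~1.8 GB. A back-of-the-envelope sketch (weights only, ignoring activations and packing overhead):

```python
def quantised_size_bytes(num_params: float, bits_per_param: int) -> float:
    """Raw weight storage for a uniformly quantised model
    (weights only; ignores activations, KV cache and packing overhead)."""
    return num_params * bits_per_param / 8

size = quantised_size_bytes(3.8e9, 4)  # phi-3-mini at 4-bit
print(f"{size / 1e9:.2f} GB")    # 1.90 GB (decimal gigabytes)
print(f"{size / 2**30:.2f} GiB") # 1.77 GiB, close to the paper's ~1.8 GB figure
```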

    At MWC, Höttges predicted that five or ten years from now, “nobody will use apps anymore. We will use the interface of speech, or an easy way of asking the system, and be automatically connected to the functionalities of the apps.”  

    DT demonstrated the concept on its own-branded T Phone at the show. An AI-based assistant replaces all the apps on smartphones so that people can access what they need through a “generative interface” via voice or text. DT is working with Brain.ai and Qualcomm to develop the technology in Europe and the US. 

    Earlier this month, a Worldpanel ComTech study showed that 25% of Samsung Galaxy S24 buyers say AI is a key reason to buy, and that Samsung and Google, which are successfully marketing “halo” artificial intelligence (AI) features in their devices, can influence consumer behaviour. AI may not be generally understood by the mass market, but it knows a great acronym when it sees one.  

    Available on Azure AI 

    In a blog post, Microsoft GenAI corporate VP Misha Bilenko said Phi-3 models significantly outperform language models of the same and larger sizes on key benchmarks. Phi-3-mini does better than models twice its size, and Phi-3-small and Phi-3-medium outperform much larger models, including GPT-3.5T.   

    He added that small language models, like Phi-3, are especially great for: resource-constrained environments, including on-device and offline inference scenarios; latency-bound scenarios where fast response times are critical; and cost-constrained use cases, particularly those with simpler tasks. 

    “Thanks to their smaller size, Phi-3 models can be used in compute-limited inference environments. Phi-3-mini, in particular, can be used on-device, especially when further optimized with ONNX Runtime for cross-platform availability,” he said. “The smaller size of Phi-3 models also makes fine-tuning or customisation easier and more affordable. In addition, their lower computational needs make them a lower cost option with much better latency.” 

    He added: “The longer context window enables taking in and reasoning over large text content—documents, web pages, code, and more. Phi-3-mini demonstrates strong reasoning and logic capabilities, making it a good candidate for analytical tasks.” 

    Still showing familiar AI weaknesses 

    In terms of LLM capabilities, while the phi-3-mini model achieves a similar level of language understanding and reasoning ability to much larger models, it is still fundamentally limited by its size for certain tasks. 

    The researchers found the model simply does not have the capacity to store much “factual knowledge”, which can be seen, for example, in its low performance on the TriviaQA task. “However, we believe such weakness can be resolved by augmentation with a search engine,” they wrote.  
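One common way to realise the search-engine augmentation the researchers suggest is a simple retrieve-then-prompt loop, so facts come from the context window rather than the model's limited parametric memory. The sketch below uses hypothetical `search` and `generate` stand-ins; it is not Microsoft's implementation or any real Phi-3 API:

```python
# Minimal retrieve-then-prompt sketch. `search` and `generate` are
# hypothetical stand-ins, not real search-engine or Phi-3 APIs.
def search(query: str) -> list[str]:
    # Stand-in for a real search-engine call returning relevant snippets.
    return ["Phi-3 is a family of small language models released by Microsoft."]

def generate(prompt: str) -> str:
    # Stand-in for on-device phi-3-mini inference.
    return f"(model output conditioned on a {len(prompt)}-character prompt)"

def answer_with_retrieval(question: str) -> str:
    # Prepend retrieved snippets so the model can draw facts from the
    # prompt instead of its own weights.
    context = "\n".join(search(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer_with_retrieval("Who makes Phi-3?"))
```

The design point is that the small model only needs reasoning capacity; recall of specifics is offloaded to the retriever.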

    Another weakness related to the model’s capacity is that the researchers mostly restricted its language to English. “Exploring multilingual capabilities for Small Language Models is an important next step, with some initial promising results on phi-3-small by including more multilingual data,” they stated.  

    “Despite our diligent RAI efforts, as with most LLMs, there remains challenges around factual inaccuracies (or hallucinations), reproduction or amplification of biases, inappropriate content generation, and safety issues,” said the research paper. “The use of carefully curated training data, and targeted post-training, and improvements from red-teaming insights significantly mitigates these issues across all dimensions. However, there is significant work ahead to fully address these challenges.”