Apple Introduces New AI Model for Image Editing with Text Input
Tech companies are continuously exploring the potential of AI, and Apple is no different. The company has recently unveiled an open-source AI model called MLLM-Guided Image Editing (MGIE). The model lets users edit images simply by describing the desired change in a text prompt, rather than working through manual editing tools.
With MGIE, users can make both minor and major adjustments to their images. For instance, suppose you want to make a pizza appear healthier by adding vegetables and herbs. Given a text prompt describing that change, the model can incorporate those elements into the image. Users can also ask the tool to resize, crop, or rotate an image, or to adjust its brightness, contrast, and color balance.
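To make the brightness and contrast adjustments above concrete: these are classical per-pixel operations that editors normally expose as sliders, and that MGIE instead accepts as plain-text requests. A minimal pure-Python sketch (the function names and the tiny 2x2 "image" are illustrative, not part of MGIE's API) shows the underlying arithmetic:

```python
def adjust_brightness(pixels, factor):
    """Scale each channel value by `factor`; clamp to the 0-255 range."""
    return [[tuple(min(255, int(c * factor)) for c in px) for px in row]
            for row in pixels]

def adjust_contrast(pixels, factor):
    """Scale each channel's distance from the midpoint 128 by `factor`."""
    return [[tuple(min(255, max(0, int(128 + (c - 128) * factor))) for c in px)
             for px in row] for row in pixels]

# A tiny 2x2 "image" of RGB tuples standing in for a real photo.
img = [[(100, 150, 200), (0, 255, 128)],
       [(50, 50, 50), (255, 255, 255)]]

brighter = adjust_brightness(img, 1.2)   # e.g. 100 -> 120
punchier = adjust_contrast(img, 1.5)     # values pushed away from 128
```

A text-driven editor like MGIE maps an instruction such as "make it brighter" onto operations of this kind, choosing the parameters for the user.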
Apple’s new AI tool is accessible to all users and can be found on Hugging Face Spaces. Developed in collaboration with the University of California, Santa Barbara, the tool harnesses multimodal large language models (MLLMs) to interpret the instructions users provide.
Despite the excitement surrounding this new AI model, it remains uncertain whether Apple will incorporate it into their existing products. When it comes to AI, Apple has lagged behind its competitors, such as Google and Samsung. However, CEO Tim Cook has recently expressed the company’s commitment to investing heavily in AI in the future.
The introduction of MGIE showcases Apple’s continued push into AI. By letting users describe edits in natural language, Apple aims to streamline the image editing process and make it more accessible to everyone. While its place in Apple products remains uncertain, the possibilities this tool presents are undoubtedly intriguing.
As the AI landscape continues to evolve, we can expect more advancements and applications of generative AI technology. With tech giants like Apple, Google, and Samsung pushing the boundaries, the future of AI holds great promise.