Samsung Electronics Co. has recently restricted its employees from using popular generative AI tools such as ChatGPT, Google Bard, and Bing due to security concerns. Other well-known firms have followed suit, slowing the adoption of generative AI in enterprise applications. Samsung, meanwhile, says it is tightening its data-security measures and developing its own secure AI tools and programs.
Recent reports have exposed malicious fake ChatGPT applications on Apple's App Store, with logos and names closely mimicking OpenAI's ChatGPT and posing serious security risks for users. Microsoft, Alphabet, Apple, and Baidu have also been targeted by similar impersonations. Apple CEO Tim Cook, taking cues from Steve Jobs, has demanded secure, high-quality services for customers. With OpenAI's ChatGPT gaining immense popularity, reaching 100 million users just 64 days after launch, businesses and users must be careful when downloading apps to avoid potential risks.
Scammers are leveraging OpenAI's free ChatGPT technology to create malicious chatbot applications and steal personal information. According to recent data, ChatGPT-related fraud is growing at an alarming rate, with related domain registrations increasing by 910% and new malicious URLs detected daily. Users are advised to exercise caution when interacting with chatbots, use only official OpenAI websites, and never share personal information. Stay alert and stay secure when using ChatGPT.
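As a minimal illustration of the "use only official OpenAI websites" advice, the sketch below checks whether a link actually points to an OpenAI domain before it is trusted. The allow-list of domains and the function name are assumptions for the example, not an exhaustive or authoritative list of OpenAI properties.

```python
# Illustrative sketch: verify that a chatbot link uses HTTPS and points to an
# allow-listed OpenAI domain before trusting it. The allow-list is an assumption.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}  # assumed allow-list

def is_official_openai_url(url: str) -> bool:
    """Return True only if the URL is HTTPS and its host matches or is a
    subdomain of an allow-listed domain."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return False
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

if __name__ == "__main__":
    print(is_official_openai_url("https://chat.openai.com/"))        # True
    print(is_official_openai_url("https://chatgpt-free-login.xyz"))  # False
```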
ChatGPT is a large language model tool that software engineers and developers use to simplify workflows, and it gained significant attention in 2023. The platform carries security risks, so organizations should be aware of and educated about them even if they are not using it directly. Learn more about the security risks, ways to secure secrets, developer education, and more.
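One concrete way to reduce the secret-leakage risk mentioned above is to scrub obvious credential patterns from any text before it is pasted into an external chatbot. The sketch below is a minimal example under that assumption; the regexes and helper name are illustrative only, not a complete secret-detection ruleset.

```python
# Minimal sketch: strip common secret-like patterns from text before sharing it
# with an external tool. The patterns below are illustrative examples only.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID format
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),        # generic "api_key = ..." lines
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key headers
]

def redact_secrets(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

if __name__ == "__main__":
    snippet = "config: api_key = sk-example-123\nAKIAABCDEFGHIJKLMNOP"
    print(redact_secrets(snippet))  # both lines come back redacted
```

A check like this can also run as a pre-commit hook or a clipboard filter, which keeps the policy enforceable without relying on every developer remembering it.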
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?