South Korean tech giant Samsung has temporarily banned its employees from using popular generative AI tools like OpenAI’s ChatGPT on company devices after it found that such services were being misused.
A memo was released in late April after Samsung discovered that some employees had uploaded sensitive code to ChatGPT. The company advised employees to exercise caution when using ChatGPT and other products outside the workplace and not to enter any personal or business-related information.
Samsung is not alone: American investment bank JP Morgan banned the use of ChatGPT among its employees earlier this year, and Amazon has advised its staff not to upload any information or code to services such as ChatGPT.
These tools can help engineers write computer code, for example, speeding up operations and reducing employees’ workloads. But entering sensitive company data into such services poses a risk for businesses, as it can lead to the leakage of critical information.
Meanwhile, Meta said there has been a significant increase in malware disguised as ChatGPT and similar AI tools. Since March 2023 alone, the company’s researchers have detected 10 malware families using ChatGPT and similar themes to compromise accounts online, and it has blocked over 1,000 such links from its platform.
The company also said that scammers often use mobile apps or browser extensions that masquerade as ChatGPT tools. While these tools may in some cases provide some ChatGPT functionality, their true goal is to steal users’ account credentials.
Criminals frequently target users’ personal accounts to gain access to a linked business page or advertising account, both of which are more likely to have credit cards attached. While Meta is still working out its plan to tackle this problem, untrustworthy chatbot apps have already invaded app stores.
If you search for ChatGPT in Google’s Play Store, you will get a long list of apps from various developers. Researchers have also discovered third-party Android app stores promoting fake ChatGPT apps that install malware on people’s smartphones.
That is not all. Fake apps have also been detected on the Mac App Store, where dozens of apps claiming to be OpenAI or ChatGPT apps have been found. The developers of these apps are flooding the store with similar-looking apps, confusing consumers with fake reviews and OpenAI’s logo.
One researcher found two such app developers, Pixelsbay and ParallelWorld, in the App Store; both apparently share the same parent company in Pakistan. It should be noted that OpenAI offers no official ChatGPT app.
However, another security researcher pointed out on social media how a website that looks identical to the official OpenAI ChatGPT domain can infect a user’s device with malware that steals sensitive personal information.
While cybersecurity experts advise people to be cautious, steer clear of such scams, and avoid sharing sensitive information even when using trusted AI chatbots, the question of international regulatory standards remains open.
Fake apps and suspicious websites can be removed, but the concern with AI is much bigger than that. ChatGPT has not yet celebrated its first birthday, and already there is a growing list of alternatives, including Google’s Bard, Microsoft Bing, GitHub Copilot X (for coding), and more.
Needless to say, all this is happening at a time when countries are searching for legal mechanisms to rein in an AI sector that is largely unregulated and growing at a rapid pace. China, the US, the UK, the European Union, Australia and a few other countries are already seeking public input on possible rules.
But some believe the field’s rapid growth trajectory will make it difficult for policymakers to establish a stable legal framework, and that a flexible structure will be needed to keep pace with AI.