AI chatbots collect a lot of data about us
Using chatbots can be dangerous because of the amount of data they collect about us. – Companies collect information from the prompts we enter (queries – ed.), which may contain data about, for example, the language we use, our appearance in the case of photos, our views, family situation, problems, and so on. Other companies additionally ask for access to information on our device, such as contacts or location. The Chinese DeepSeek also collected keystroke patterns, from which a lot can be inferred, e.g. a user's age, or whether we are tired or sleep-deprived – warns Mateusz Chrobok, a cybersecurity expert, in an interview with PAP.
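To show what "keystroke patterns" actually consist of, here is a minimal Python sketch, with made-up event data, of the two timing features typing-rhythm profiling typically relies on: dwell time (how long a key is held) and flight time (the gap between keys). The event format and values are illustrative assumptions, not DeepSeek's actual telemetry.

```python
from statistics import mean, stdev

# Hypothetical key events: (key, press_time_ms, release_time_ms),
# roughly what a web page could record from keydown/keyup events.
events = [
    ("h", 0, 95), ("e", 180, 260), ("l", 340, 430),
    ("l", 520, 600), ("o", 690, 785),
]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# Aggregates like these feed typing-rhythm profiles, which the article
# notes can hint at traits such as age or fatigue.
print(f"mean dwell:  {mean(dwell):.0f} ms (sd {stdev(dwell):.0f})")
print(f"mean flight: {mean(flight):.0f} ms (sd {stdev(flight):.0f})")
```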
What to watch out for?
Various data can be extracted from a chatbot, and if an AI model is later trained on that data, which is a common practice, removing it will not be easy. Studies have shown that models memorize various pieces of information, so we must be careful not to hand them anything by accident, e.g. a credit card number. It is also worth paying attention to which chatbot we use. The Chinese DeepSeek is blocked in many places because, "under Chinese law, its creators must hand over user data to the authorities, who can, for example, pass it on to the intelligence services", explains Mateusz Chrobok. It is also important to avoid pasting company documents into a chatbot, as they may contain confidential data. In 2023, non-public Samsung data leaked in exactly this way, after an employee uploaded a work presentation to ChatGPT.
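One practical safeguard is to strip sensitive strings locally before a prompt ever leaves your machine. The sketch below masks card-like digit sequences with a regular expression; the pattern, placeholder text, and redact function are illustrative assumptions, and a real filter would also cover other identifiers (addresses, ID numbers, and so on).

```python
import re

# Matches 13-16 digit sequences, optionally separated by spaces or dashes,
# which is roughly the shape of a payment card number (illustrative only).
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(prompt: str) -> str:
    """Replace card-like numbers before the prompt is sent to a chatbot."""
    return CARD_RE.sub("[REDACTED CARD]", prompt)

if __name__ == "__main__":
    raw = "Refund to card 4111 1111 1111 1111 please."
    print(redact(raw))  # -> Refund to card [REDACTED CARD] please.
```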
Keep a healthy distance from chatbot answers and chatbot therapy
Another threat that comes with using chatbots is profiled advertising. If we treat a chatbot as a therapist, there is a risk that advertisers will use information about our mental state to influence our purchasing decisions, not necessarily to our benefit. In addition, there has already been a case of a man who took his own life after talking to a chatbot. – This is an extreme case, but it shows what can threaten us when we ask AI for advice and share our mental state and views with it. The answers it gives are based on statistics, which means they will not always be accurate. Thanks to technological progress they are accurate more and more often, which makes us trust them, and that can lull our vigilance (…). That is why I encourage skepticism toward the content that chatbots generate – says Mateusz Chrobok.
How to avoid problems?
– It happens that we are tempted to hand over our data in exchange for free access to a chatbot, a better model, and so on. So in reality we trade our privacy for certain benefits – warns Mateusz Chrobok. At the same time, he does not believe we have to give up AI technology. – You just have to know how to use it safely – says the expert. He explains that in many chatbots you can, for example, opt out of having the model trained on your data.
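Where exactly this opt-out lives differs by provider – in ChatGPT it is a toggle in the data-control settings, while API vendors document their own retention rules – so the sketch below is deliberately hypothetical: the endpoint and the allow_training and store_history flags are invented names that only show where such a switch would sit in code, not any real vendor's API.

```python
# Hypothetical example only: the URL and the "allow_training" /
# "store_history" flags are invented, not a real vendor API.
# Check your provider's actual settings and documentation.
import requests

def ask(prompt: str, api_key: str) -> str:
    resp = requests.post(
        "https://api.example-chatbot.com/v1/chat",   # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "prompt": prompt,
            "allow_training": False,  # hypothetical training opt-out
            "store_history": False,   # hypothetical retention opt-out
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["answer"]
```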
Source: PAP