AI is transforming how we work: it summarizes texts, analyzes data, and recognizes patterns – and it is probably already in use in your company, or soon will be. But the efficiency of these new technologies brings new risks, especially in data protection.
What you as a manager need to know:
- Data protection laws apply fully to AI – in Switzerland as in the EU.
- Anyone who uses AI is liable for incorrect or unlawful results – even if the AI generated them.
- An AI model trained on personal data is not considered anonymous. This can have legal consequences.
- Even internally hosted AI systems count as processing personal data if personal information can be retrieved from them.
- There is no product liability for AI – responsibility always lies with the user: the company and its decision-makers.
A practical example:
Asked whether the brokerage fee is waived for cash payments in a Ricardo sale, an AI answers “Yes”. The answer is wrong – yet it is not the AI that is liable, but the person who relied on it.
Where you need to act:
- Do you have a policy for the use of AI in your company?
- Are your employees trained in the responsible use of AI tools?
- Is there a clear allocation of roles for reviewing and approving AI-generated results?
- Have you checked whether your AI applications comply with data protection law?
Conclusion:
AI is a competitive advantage – if it is used in a controlled and compliant manner. Without clear guidelines and responsibilities, however, it can become a legal and reputational risk.
Let us work together to examine how your company can use AI safely and responsibly – before it is too late.