Turkey’s Ministry of Justice is launching a new artificial intelligence (AI) tool called the CBS Organizational Prediction Project. The tool is intended to assist the justice system by using AI to find links between new legal cases and known terrorist groups, but many experts warn it could cause serious problems.
Why People Are Concerned:
- The AI might accuse innocent people of terrorism without solid proof.
- It tags cases or individuals based on patterns in data, which could unfairly affect someone before a judge even looks at the case.
- It may bias judges from the beginning by labeling someone as connected to terrorism.
How It Works:
The AI is integrated into UYAP, Turkey’s national judiciary informatics system. It checks new case information and compares it against a database of known terrorist groups; if it finds a similarity, it flags the case. Officials say this improves data accuracy and reporting, especially to international bodies like the Financial Action Task Force (FATF).
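The Ministry has not published how the matching actually works, so the following is only an illustrative sketch of the kind of watchlist-flagging step described above. All names, data, and the matching rule are hypothetical; a real system would likely use far fuzzier pattern matching, which is exactly why critics worry about false positives.

```python
# Hypothetical sketch of a watchlist-flagging step.
# Nothing here reflects the actual CBS/UYAP implementation.

def flag_case(case_text: str, watchlist: list[str]) -> list[str]:
    """Return every watchlist entry that appears in the case text.

    This naive substring check already shows the core concern:
    a flag is attached automatically, before any judge reviews
    the evidence behind the match.
    """
    text = case_text.lower()
    return [name for name in watchlist if name.lower() in text]


# Example with made-up data:
watchlist = ["Group A", "Group B"]
flags = flag_case("Suspect allegedly met members of Group A", watchlist)
print(flags)  # ['Group A']
```

Even in this toy version, the output is a label attached to a case file rather than a finding of fact, which is the gap between “data pattern” and “proof” that the experts quoted below keep pointing to.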
Risks and Problems:
- The tool could break the legal rule that says people are innocent until proven guilty.
- It could harm someone’s reputation or affect their trial unfairly.
- There’s a lack of transparency – people don’t know how the AI makes its decisions.
- If judges trust the AI too much, they might not use their own judgment.
- The data the AI uses may contain bias, which can lead to unfair treatment, especially for minority groups.
Privacy Issues:
- The tool uses a huge database that includes personal information.
- The Ministry says the data is anonymized, but past data breaches in Turkey raise doubts.
- In the past, hackers stole millions of people’s data from government systems, including voter records and health data.
Legal and Ethical Issues:
- If the AI contributes to a wrongful court decision, it is unclear who bears responsibility: the AI developers, the judge, or the government.
- AI tools like “Ez Cümle” that summarize legal texts or help write verdicts may make the justice system too mechanical.
- Some projects try to predict future crimes or group connections, which is similar to predictive policing, a system criticized in other countries for violating privacy and targeting certain groups unfairly.
Oversight Problems:
- There is an ethics committee, but it works inside the same group that made the AI tools, which limits independent oversight.
- The Ministry has only 11 technical staff working on AI, which is not enough for such an important and sensitive job.
- A law professor warned that Turkey’s current AI laws are too narrow and might not work for all types of AI systems.
Global Concerns:
Turkey is already ranked low on the World Justice Project’s Rule of Law Index (117 out of 142 countries). Experts believe using AI in this way, without clear rules and protections, could make things worse by increasing unfairness in the justice system.
Source: nordicmonitor